title | content | commands | url
---|---|---|---|
Chapter 6. Installing OpenShift DR Cluster Operator on Managed clusters | Chapter 6. Installing OpenShift DR Cluster Operator on Managed clusters Procedure On each managed cluster, navigate to OperatorHub and filter for OpenShift DR Cluster Operator . Follow the on-screen instructions to install the operator into the project openshift-dr-system . Note The OpenShift DR Cluster Operator must be installed on both the Primary managed cluster and Secondary managed cluster . Configure SSL access between the S3 endpoints so that metadata can be stored on the alternate cluster in an MCG object bucket using a secure transport protocol and in the Hub cluster for verifying access to the object buckets. Note If all of your OpenShift clusters are deployed using a signed and trusted set of certificates for your environment then this section can be skipped. Extract the ingress certificate for the Primary managed cluster and save the output to primary.crt . Extract the ingress certificate for the Secondary managed cluster and save the output to secondary.crt . Create a new ConfigMap to hold the certificate bundle of the remote cluster with file name cm-clusters-crt.yaml on the Primary managed cluster , Secondary managed cluster and the Hub cluster . Note There could be more or fewer than three certificates for each cluster, as shown in this example file. Run the following command on the Primary managed cluster , Secondary managed cluster , and the Hub cluster to create the file. Example output: Important For the Hub cluster to verify access to the object buckets using the DRPolicy resource, the same ConfigMap , cm-clusters-crt.yaml , must be created on the Hub cluster . Modify the default Proxy cluster resource. Copy and save the following content into the new YAML file proxy-ca.yaml . Apply this new file to the default proxy resource on the Primary managed cluster , Secondary managed cluster , and the Hub cluster . Example output: Retrieve Multicloud Object Gateway (MCG) keys and external S3 endpoint. Check if MCG is installed on the Primary managed cluster and the Secondary managed cluster , and if Phase is Ready . Example output: Copy the following YAML file to filename odrbucket.yaml . Create an MCG bucket odrbucket on both the Primary managed cluster and the Secondary managed cluster . Example output: Extract the odrbucket OBC access key for each managed cluster as their base-64 encoded values by using the following command. Example output: Extract the odrbucket OBC secret key for each managed cluster as their base-64 encoded values by using the following command. Example output: Create S3 Secrets for Managed clusters. Now that the necessary MCG information has been extracted, new Secrets must be created on the Primary managed cluster and the Secondary managed cluster . These new Secrets store the MCG access and secret keys for both managed clusters. Note OpenShift DR requires one or more S3 stores to store relevant cluster data of a workload from the managed clusters and to orchestrate a recovery of the workload during failover or relocate actions. These instructions are applicable for creating the necessary object bucket(s) using the Multicloud Object Gateway (MCG). MCG should already be installed as a result of installing OpenShift Data Foundation. Copy the following S3 secret YAML format for the Primary managed cluster to filename odr-s3secret-primary.yaml . Replace <primary cluster base-64 encoded access key> and <primary cluster base-64 encoded secret access key> with actual values retrieved in an earlier step. 
Create this secret on the Primary managed cluster and the Secondary managed cluster . Example output: Copy the following S3 secret YAML format for the Secondary managed cluster to filename odr-s3secret-secondary.yaml . Replace <secondary cluster base-64 encoded access key> and <secondary cluster base-64 encoded secret access key> with actual values retrieved in step 4. Create this secret on the Primary managed cluster and the Secondary managed cluster . Example output: Important The values for the access and secret key must be base-64 encoded . The encoded values for the keys were retrieved in an earlier step. Configure OpenShift DR Cluster Operator ConfigMaps on each of the managed clusters. Search for the external S3 endpoint s3CompatibleEndpoint or route for MCG on each managed cluster by using the following command. Example output: Important The unique s3CompatibleEndpoint route or s3-openshift-storage.apps.<primary clusterID>.<baseDomain> and s3-openshift-storage.apps.<secondary clusterID>.<baseDomain> must be retrieved for both the Primary managed cluster and Secondary managed cluster respectively. Search for the odrbucket OBC bucket name. Example output: Important The unique s3Bucket name odrbucket-<your value1> and odrbucket-<your value2> must be retrieved on both the Primary managed cluster and Secondary managed cluster respectively. Modify the ConfigMap ramen-dr-cluster-operator-config to add the new content. Add the following new content starting at s3StoreProfiles to the ConfigMap on the Primary managed cluster and the Secondary managed cluster . | [
"oc get cm default-ingress-cert -n openshift-config-managed -o jsonpath=\"{['data']['ca-bundle\\.crt']}\" > primary.crt",
"oc get cm default-ingress-cert -n openshift-config-managed -o jsonpath=\"{['data']['ca-bundle\\.crt']}\" > secondary.crt",
"apiVersion: v1 data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- <copy contents of cert1 from primary.crt here> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert2 from primary.crt here> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert3 primary.crt here> -----END CERTIFICATE---- -----BEGIN CERTIFICATE----- <copy contents of cert1 from secondary.crt here> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert2 from secondary.crt here> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert3 from secondary.crt here> -----END CERTIFICATE----- kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config",
"oc create -f cm-clusters-crt.yaml",
"configmap/user-ca-bundle created",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: user-ca-bundle",
"oc apply -f proxy-ca.yaml",
"proxy.config.openshift.io/cluster configured",
"oc get noobaa -n openshift-storage",
"NAME MGMT-ENDPOINTS S3-ENDPOINTS IMAGE PHASE AGE noobaa [\"https://10.70.56.161:30145\"] [\"https://10.70.56.84:31721\"] quay.io/rhceph-dev/mcg-core@sha256:c4b8857ee9832e6efc5a8597a08b81730b774b2c12a31a436e0c3fadff48e73d Ready 27h",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: odrbucket namespace: openshift-dr-system spec: generateBucketName: \"odrbucket\" storageClassName: openshift-storage.noobaa.io",
"oc create -f odrbucket.yaml",
"objectbucketclaim.objectbucket.io/odrbucket created",
"oc get secret odrbucket -n openshift-dr-system -o jsonpath='{.data.AWS_ACCESS_KEY_ID}{\"\\n\"}'",
"cFpIYTZWN1NhemJjbEUyWlpwN1E=",
"oc get secret odrbucket -n openshift-dr-system -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}{\"\\n\"}'",
"V1hUSnMzZUoxMHRRTXdGMU9jQXRmUlAyMmd5bGwwYjNvMHprZVhtNw==",
"apiVersion: v1 data: AWS_ACCESS_KEY_ID: <primary cluster base-64 encoded access key> AWS_SECRET_ACCESS_KEY: <primary cluster base-64 encoded secret access key> kind: Secret metadata: name: odr-s3secret-primary namespace: openshift-dr-system",
"oc create -f odr-s3secret-primary.yaml",
"secret/odr-s3secret-primary created",
"apiVersion: v1 data: AWS_ACCESS_KEY_ID: <secondary cluster base-64 encoded access key> AWS_SECRET_ACCESS_KEY: <secondary cluster base-64 encoded secret access key> kind: Secret metadata: name: odr-s3secret-secondary namespace: openshift-dr-system",
"oc create -f odr-s3secret-secondary.yaml",
"secret/odr-s3secret-secondary created",
"oc get route s3 -n openshift-storage -o jsonpath --template=\"https://{.spec.host}{'\\n'}\"",
"https://s3-openshift-storage.apps.perf1.example.com",
"oc get configmap odrbucket -n openshift-dr-system -o jsonpath='{.data.BUCKET_NAME}{\"\\n\"}'",
"odrbucket-2f2d44e4-59cb-4577-b303-7219be809dcd",
"oc edit configmap ramen-dr-cluster-operator-config -n openshift-dr-system",
"[...] data: ramen_manager_config.yaml: | apiVersion: ramendr.openshift.io/v1alpha1 kind: RamenConfig [...] ramenControllerType: \"dr-cluster\" ### Start of new content to be added s3StoreProfiles: - s3ProfileName: s3-primary s3CompatibleEndpoint: https://s3-openshift-storage.apps.<primary clusterID>.<baseDomain> s3Region: primary s3Bucket: odrbucket-<your value1> s3SecretRef: name: odr-s3secret-primary namespace: openshift-dr-system - s3ProfileName: s3-secondary s3CompatibleEndpoint: https://s3-openshift-storage.apps.<secondary clusterID>.<baseDomain> s3Region: secondary s3Bucket: odrbucket-<your value2> s3SecretRef: name: odr-s3secret-secondary namespace: openshift-dr-system [...]"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/configuring_openshift_data_foundation_for_regional-dr_with_advanced_cluster_management/installing-openshift-dr-cluster-operator-on-managed-clusters_rhodf |
14.19. Setting Schedule Parameters | 14.19. Setting Schedule Parameters schedinfo allows scheduler parameters to be passed to guest virtual machines. The following command format should be used: Each parameter is explained below: domain - this is the guest virtual machine domain --set - the string placed here is the controller or action that is to be called. Additional parameters or values, if required, should be added as well. --current - when used with --set , will use the specified set string as the current scheduler information. When used without --set , it displays the current scheduler information. --config - when used with --set , will apply the specified set string on the next reboot. When used without --set , it displays the scheduler information that is saved in the configuration file. --live - when used with --set , will use the specified set string on a guest virtual machine that is currently running. When used without --set , it displays the configuration setting currently used by the running virtual machine. The scheduler can be set with any of the following parameters: cpu_shares , vcpu_period and vcpu_quota . Example 14.5. schedinfo show This example shows the shell guest virtual machine's schedule information. Example 14.6. schedinfo set In this example, the cpu_shares value is changed to 2046. This affects the current state and not the configuration file. | [
"virsh schedinfo domain --set --weight --cap --current --config --live",
"virsh schedinfo shell Scheduler : posix cpu_shares : 1024 vcpu_period : 100000 vcpu_quota : -1",
"virsh schedinfo --set cpu_shares=2046 shell Scheduler : posix cpu_shares : 2046 vcpu_period : 100000 vcpu_quota : -1"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-Managing_guest_virtual_machines_with_virsh-Setting_schedule_parameters |
2.5. Tuning the Index | 2.5. Tuning the Index 2.5.1. Near-Realtime Index Manager By default, each update is immediately flushed into the index. In order to achieve better throughput, the updates can be batched. However, this can result in a lag between the update and query -- the query can see outdated data. If this is acceptable, you can use the Near-Realtime Index Manager by setting the following. 2.5.2. Tuning Infinispan Directory The Lucene directory uses three caches to store the index: Data cache Metadata cache Locking cache Configuration for these caches can be set explicitly, specifying the cache names as in the example below, and configuring those caches as usual. All of these caches must be clustered unless Infinispan Directory is used in local mode. Example 2.7. Tuning the Infinispan Directory 2.5.3. Per-Index Configuration The indexing properties in the examples above apply to all indices - this is because we use the default. prefix for each property. To specify a different configuration for each index, replace default with the index name. By default, this is the full class name of the indexed object; however, you can override the index name in the @Indexed annotation. | [
"<property name=\"default.indexmanager\" value=\"near-real-time\" />",
"<namedCache name=\"indexedCache\"> <clustering mode=\"DIST\"/> <indexing enabled=\"true\"> <properties> <property name=\"default.indexmanager\" value=\"org.infinispan.query.indexmanager.InfinispanIndexManager\" /> <property name=\"default.metadata_cachename\" value=\"lucene_metadata_repl\"/> <property name=\"default.data_cachename\" value=\"lucene_data_dist\"/> <property name=\"default.locking_cachename\" value=\"lucene_locking_repl\"/> </properties> </indexing> </namedCache> <namedCache name=\"lucene_metadata_repl\"> <clustering mode=\"REPL\"/> </namedCache> <namedCache name=\"lucene_data_dist\"> <clustering mode=\"DIST\"/> </namedCache> <namedCache name=\"lucene_locking_repl\"> <clustering mode=\"REPL\"/> </namedCache>"
]
| https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/infinispan_query_guide/sect-tuning_the_index |
Chapter 1. Introduction to Red Hat JBoss Web Server installation | Chapter 1. Introduction to Red Hat JBoss Web Server installation Red Hat JBoss Web Server is a fully integrated and certified set of components for hosting Java web applications. Red Hat JBoss Web Server provides a fully supported implementation of the Apache Tomcat Servlet container and the Tomcat native library. Note If you need clustering or session replication support for Java applications, use Red Hat JBoss Enterprise Application Platform (JBoss EAP). 1.1. JBoss Web Server components JBoss Web Server includes components such as the Apache Tomcat Servlet container, Tomcat native library, Tomcat vault, mod_cluster library, Apache Portable Runtime (APR), and OpenSSL. Apache Tomcat Apache Tomcat is a servlet container in accordance with the Java Servlet Specification. JBoss Web Server 6.x contains Apache Tomcat 10.1. Tomcat native library The Tomcat native library improves Tomcat scalability, performance, and integration with native server technologies. Tomcat vault Tomcat vault is an extension for JBoss Web Server that is used for securely storing passwords and other sensitive information used by a JBoss Web Server. Mod_cluster The mod_cluster library enables communication between Apache Tomcat and the mod_proxy_cluster module of the Apache HTTP Server. The mod_cluster library enables you to use the Apache HTTP Server as a load balancer for JBoss Web Server. For more information about configuring mod_cluster , or for information about installing and configuring alternative load balancers such as mod_jk and mod_proxy , see the HTTP Connectors and Load Balancing Guide . Apache Portable Runtime The Apache Portable Runtime (APR) provides an OpenSSL-based TLS implementation for the HTTP connectors. OpenSSL OpenSSL is a software library that implements the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. OpenSSL includes a basic cryptographic library. For a full list of components that Red Hat JBoss Web Server supports, see the JBoss Web Server Component Details page. 1.2. Differences between the Apache Tomcat distributions that Red Hat provides Both Red Hat JBoss Web Server and Red Hat Enterprise Linux (RHEL) provide separate distributions of Apache Tomcat. However, JBoss Web Server offers distinct benefits compared to the RHEL distribution of Apache Tomcat by including an integrated and certified set of additional components and features. JBoss Web Server also provides more frequent software and security updates. Note RHEL provides a distribution of Apache Tomcat on RHEL 7, RHEL 8.8 or later, and RHEL 9.2 or later only. For RHEL 8.0 through 8.7 and RHEL 9.0 through 9.1, the RHEL platform subscriptions do not provide a distribution of Apache Tomcat. On these operating system versions, JBoss Web Server is the only Apache Tomcat distribution that Red Hat provides, which is available as part of the Middleware Runtimes subscription. Apache Tomcat versions Consider the following version information for the Apache Tomcat distributions that are available with JBoss Web Server and RHEL: Apache Tomcat versions available with RHEL The RHEL 7 tomcat package is based on the community version of Apache Tomcat 7. The RHEL 8.x and RHEL 9.x tomcat package is based on the community version of Apache Tomcat 9. The tomcat package is available with RHEL 8.8 or later and RHEL 9.2 or later only. 
Apache Tomcat versions available with JBoss Web Server JBoss Web Server 3.1 provides a distribution of Apache Tomcat 7 and Apache Tomcat 8 together with an integrated and certified set of additional components and features. However, Red Hat no longer fully supports or maintains JBoss Web Server 3.1, which is currently in extended life cycle support (ELS) phase 2 with a planned end-of-life date of December 2028. JBoss Web Server 5.x provides a distribution of Apache Tomcat 9 that Red Hat fully tests and supports together with an integrated and certified set of additional components and features. Red Hat plans to end full support for JBoss Web Server 5.x on July 31, 2024. Red Hat will provide maintenance support until July 31, 2025 followed by extended life cycle support (ELS) phase 1 with a planned end-of-life date of July 2027. JBoss Web Server 6.x provides a distribution of Apache Tomcat 10.1 that Red Hat fully tests and supports together with an integrated and certified set of additional components and features. For more information about product life cycle phases and available support levels, see Life Cycle Phases . For more information about Apache Tomcat versions, see Apache Tomcat versions supported by Red Hat . Note Red Hat does not provide support for community releases of Apache Tomcat. Differences between JBoss Web Server and RHEL distributions of Apache Tomcat Consider the following differences between JBoss Web Server and the RHEL distribution of Apache Tomcat: JBoss Web Server RHEL distribution of Apache Tomcat Supports installation from archive files or RPM packages on RHEL versions 8 and 9. Note Red Hat does not provide a distribution of JBoss Web Server 6.x on RHEL 7. Supports installation from RPM packages only on RHEL 7, RHEL 8.8, and RHEL 9.2 or later. Supports installation from archive files on supported Windows Server platforms. Not applicable Offers developers support for creating and deploying back-end web applications and large-scale websites that can service client requests from Apache HTTP Server proxies in a secure and stable environment. Offers administrators support for deploying and running Apache Tomcat instances on a RHEL system. 
Provides a fully tested and supported distribution of Apache Tomcat that includes the following integrated and certified set of additional features and functionality: Fully tested and certified integration with the Apache HTTP Server for the forwarding and load-balancing of web client requests to back-end web applications by using a mod_proxy , mod_jk , or mod_proxy_cluster connector Tomcat native library for improving Apache Tomcat scalability, performance, and integration with native server technologies Tomcat Vault extension for masking passwords and other sensitive strings and securely storing sensitive information in an encrypted Java keystore Mod_cluster library for enabling communication and intelligent load-balancing of web traffic between the mod_proxy_cluster module of the Apache HTTP Server and back-end JBoss Web Server worker nodes Apache Portable Runtime (APR) library for providing an OpenSSL-based TLS implementation for the HTTP connectors Federal Information Processing Standards (FIPS) compliance Support for JBoss Web Server in Red Hat OpenShift environments JBoss Web Server Operator for managing OpenShift container images and for creating, configuring, managing, and seamlessly upgrading instances of web server applications in Red Hat OpenShift environments Automated installation of JBoss Web Server by using a Red Hat Ansible certified content collection Offers developers support for creating and deploying back-end web applications and large-scale websites that can service client requests from Apache HTTP Server proxies in a secure and stable environment Provides only a standard distribution of Apache Tomcat with infrequent software updates that is based on the community version. Provides a set of Maven repository artifacts in a jws-6. X . x -maven-repository.zip file that you can download from the Red Hat Customer Portal. You can use these artifacts in the web application archive (WAR) files for your application deployment projects. Not applicable Also includes libraries for embedded Tomcat in the jws-6. X . x -maven-repository.zip file, which enables you to build web applications by using embedded Tomcat with a fully supported Apache Tomcat version. Not applicable Differences between the JBoss Web Server and RHEL documentation sets The JBoss Web Server documentation set is broader and more comprehensive than the RHEL documentation for the tomcat package: JBoss Web Server includes a Red Hat JBoss Web Server 6.0. x Documentation archive file that contains API documentation for Apache Tomcat 10.1 and Tomcat Vault. You can download this archive file from the Red Hat Customer Portal . The JBoss Web Server product documentation page provides information on all of the following types of use cases: Performing a standard installation of JBoss Web Server from an archive file or RPM package on supported operating systems. Configuring JBoss Web Server for use with Apache HTTP Server connectors and load-balancers such as mod_jk and mod_proxy_cluster . Enabling automated installations of JBoss Web Server by using a Red Hat Ansible certified content collection. Using JBoss Web Server in a Red Hat OpenShift environment. Installing and using the JBoss Web Server Operator for OpenShift. Configuring JBoss Web Server to support features such as the HTTP/2 protocol, Tomcat Vault, and FIPS compliance. 1.3. JBoss Web Server operating systems and configurations Red Hat JBoss Web Server supports different versions of the Red Hat Enterprise Linux and Microsoft Windows operating systems. 
Additional resources JBoss Web Server 6 Supported Configurations 1.4. JBoss Web Server installation methods You can install Red Hat JBoss Web Server on supported Red Hat Enterprise Linux and Microsoft Windows systems by using archive installation files that are available for each platform. You can also install JBoss Web Server on supported Red Hat Enterprise Linux systems by using RPM packages. The following components are included in the archive installation files. These components are the core parts of a JBoss Web Server installation. jws-6.0.0-application-server.zip Apache Tomcat 10.1 mod_cluster Tomcat vault jws-6.0.0-optional-native-components- <platform> - <architecture> .zip Platform-specific utilities 1.5. JBoss Web Server component documentation bundle JBoss Web Server includes an additional documentation bundle that includes the original vendor documentation for each component. You can download this documentation bundle, jws-6.0.0-docs.zip , from the Red Hat Customer Portal . The documentation bundle contains additional documentation for the following components: Apache Tomcat Tomcat native library Tomcat vault | null | https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/installation_guide/assembly_introduction-to-red-hat-jboss-web-server-installation_jboss_web_server_installation_guide |
4.3. Eviction and Expiration Comparison | 4.3. Eviction and Expiration Comparison Expiration is a top-level construct in Red Hat JBoss Data Grid, and is represented in the global configuration, as well as the cache API . Eviction is limited to the cache instance it is used in, whilst expiration is cluster-wide. Expiration life spans ( lifespan ) and idle time ( maxIdle in Library Mode and max-idle in Remote Client-Server Mode) values are replicated alongside each cache entry. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/eviction_and_expiration_comparison1 |
Chapter 1. Deploying a Spring Boot application with Argo CD | Chapter 1. Deploying a Spring Boot application with Argo CD With Argo CD, you can deploy your applications to the OpenShift Container Platform cluster either by using the Argo CD dashboard or by using the oc tool. 1.1. Creating an application by using the Argo CD dashboard Argo CD provides a dashboard which allows you to create applications. Prerequisites You have logged in to the OpenShift Container Platform cluster as an administrator. You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster. You have logged in to Argo CD instance. Procedure In the Argo CD dashboard, click NEW APP to add a new Argo CD application. For this workflow, create a spring-petclinic application with the following configurations: Application Name spring-petclinic Project default Sync Policy Automatic Repository URL https://github.com/redhat-developer/openshift-gitops-getting-started Revision HEAD Path app Destination https://kubernetes.default.svc Namespace spring-petclinic Click CREATE to create your application. Open the Administrator perspective of the web console and expand Administration Namespaces . Search for and select the namespace, then enter argocd.argoproj.io/managed-by=openshift-gitops in the Label field so that the Argo CD instance in the openshift-gitops namespace can manage your namespace. 1.2. Creating an application by using the oc tool You can create Argo CD applications in your terminal by using the oc tool. Prerequisites You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster. You have logged in to an Argo CD instance. Procedure Download the sample application : USD git clone [email protected]:redhat-developer/openshift-gitops-getting-started.git Create the application: USD oc create -f openshift-gitops-getting-started/argo/app.yaml Run the oc get command to review the created application: USD oc get application -n openshift-gitops Add a label to the namespace your application is deployed in so that the Argo CD instance in the openshift-gitops namespace can manage it: USD oc label namespace spring-petclinic argocd.argoproj.io/managed-by=openshift-gitops 1.3. Verifying Argo CD self-healing behavior Argo CD constantly monitors the state of deployed applications, detects differences between the specified manifests in Git and live changes in the cluster, and then automatically corrects them. This behavior is referred to as self-healing. You can test and observe the self-healing behavior in Argo CD. Prerequisites You have installed the Red Hat OpenShift GitOps Operator on your OpenShift Container Platform cluster. You have logged in to an Argo CD instance. The sample app-spring-petclinic application is deployed and configured. Procedure In the Argo CD dashboard, verify that your application has the Synced status. Click the app-spring-petclinic tile in the Argo CD dashboard to view the application resources that are deployed to the cluster. In the OpenShift Container Platform web console, navigate to the Developer perspective. Modify the Spring PetClinic deployment and commit the changes to the app/ directory of the Git repository. Argo CD will automatically deploy the changes to the cluster. Fork the OpenShift GitOps getting started repository . In the deployment.yaml file, change the failureThreshold value to 5 . 
In the deployment cluster, run the following command to verify the changed value of the failureThreshold field: USD oc edit deployment spring-petclinic -n spring-petclinic Test the self-healing behavior by modifying the deployment on the cluster and scaling it up to two pods while watching the application in the OpenShift Container Platform web console. Run the following command to modify the deployment: USD oc scale deployment spring-petclinic --replicas 2 -n spring-petclinic In the OpenShift Container Platform web console, notice that the deployment scales up to two pods and immediately scales down again to one pod. Argo CD detected a difference from the Git repository and auto-healed the application on the OpenShift Container Platform cluster. In the Argo CD dashboard, click the app-spring-petclinic tile APP DETAILS EVENTS . The EVENTS tab displays the following events: Argo CD detecting out of sync deployment resources on the cluster and then resyncing the Git repository to correct it. | [
"git clone [email protected]:redhat-developer/openshift-gitops-getting-started.git",
"oc create -f openshift-gitops-getting-started/argo/app.yaml",
"oc get application -n openshift-gitops",
"oc label namespace spring-petclinic argocd.argoproj.io/managed-by=openshift-gitops",
"oc edit deployment spring-petclinic -n spring-petclinic",
"oc scale deployment spring-petclinic --replicas 2 -n spring-petclinic"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.12/html/argo_cd_applications/deploying-a-spring-boot-application-with-argo-cd |
Chapter 241. Netty HTTP Component (deprecated) | Chapter 241. Netty HTTP Component (deprecated) Available as of Camel version 2.12 The netty-http component is an extension to Netty component to facilitate HTTP transport with Netty . This camel component supports both producer and consumer endpoints. Warning This component is deprecated. You should use Netty4 HTTP . Note Stream . Netty is stream based, which means the input it receives is submitted to Camel as a stream. That means you will only be able to read the content of the stream once . If you find a situation where the message body appears to be empty or you need to access the data multiple times (eg: doing multicasting, or redelivery error handling) you should use Stream caching or convert the message body to a String which is safe to be re-read multiple times. Notice Netty4 HTTP reads the entire stream into memory using io.netty.handler.codec.http.HttpObjectAggregator to build the entire full http message. But the resulting message is still a stream based message which is readable once. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-netty-http</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 241.1. URI format The URI scheme for a netty component is as follows netty-http:http://0.0.0.0:8080[?options] You can append query options to the URI in the following format, ?option=value&option=value&... Note Query parameters vs endpoint options . You may be wondering how Camel recognizes URI query parameters and endpoint options. For example you might create endpoint URI as follows - netty-http:http//example.com?myParam=myValue&compression=true . In this example myParam is the HTTP parameter, while compression is the Camel endpoint option. The strategy used by Camel in such situations is to resolve available endpoint options and remove them from the URI. It means that for the discussed example, the HTTP request sent by Netty HTTP producer to the endpoint will look as follows - http//example.com?myParam=myValue , because compression endpoint option will be resolved and removed from the target URL. Keep also in mind that you cannot specify endpoint options using dynamic headers (like CamelHttpQuery ). Endpoint options can be specified only at the endpoint URI definition level (like to or from DSL elements). 241.2. HTTP Options A lot more options This component inherits all the options from Netty . So make sure to look at the Netty documentation as well. Notice that some options from Netty is not applicable when using this Netty HTTP component, such as options related to UDP transport. The Netty HTTP component supports 7 options, which are listed below. Name Description Default Type nettyHttpBinding (advanced) To use a custom org.apache.camel.component.netty.http.NettyHttpBinding for binding to/from Netty and Camel Message API. NettyHttpBinding configuration (common) To use the NettyConfiguration as configuration when creating endpoints. NettyHttpConfiguration headerFilterStrategy (advanced) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter headers. HeaderFilterStrategy securityConfiguration (security) Refers to a org.apache.camel.component.netty.http.NettyHttpSecurityConfiguration for configuring secure web resources. NettyHttpSecurity Configuration useGlobalSslContext Parameters (security) Enable usage of global SSL context parameters. 
false boolean maximumPoolSize (advanced) The core pool size for the ordered thread pool, if its in use. The default value is 16. 16 int resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Netty HTTP endpoint is configured using URI syntax: with the following path and query parameters: 241.2.1. Path Parameters (4 parameters): Name Description Default Type protocol Required The protocol to use which is either http or https String host Required The local hostname such as localhost, or 0.0.0.0 when being a consumer. The remote HTTP server hostname when using producer. String port The host port number int path Resource path String 241.2.2. Query Parameters (78 parameters): Name Description Default Type bridgeEndpoint (common) If the option is true, the producer will ignore the Exchange.HTTP_URI header, and use the endpoint's URI for request. You may also set the throwExceptionOnFailure to be false to let the producer send all the fault response back. The consumer working in the bridge mode will skip the gzip compression and WWW URL form encoding (by adding the Exchange.SKIP_GZIP_ENCODING and Exchange.SKIP_WWW_FORM_URLENCODED headers to the consumed exchange). false boolean disconnect (common) Whether or not to disconnect(close) from Netty Channel right after use. Can be used for both consumer and producer. false boolean keepAlive (common) Setting to ensure socket is not closed due to inactivity true boolean reuseAddress (common) Setting to facilitate socket multiplexing true boolean sync (common) Setting to set endpoint as one-way or request-response true boolean tcpNoDelay (common) Setting to improve TCP protocol performance true boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean matchOnUriPrefix (consumer) Whether or not Camel should try to find a target consumer by matching the URI prefix if no exact match is found. false boolean send503whenSuspended (consumer) Whether to send back HTTP status code 503 when the consumer has been suspended. If the option is false then the Netty Acceptor is unbound when the consumer is suspended, so clients cannot connect anymore. true boolean backlog (consumer) Allows to configure a backlog for netty consumer (server). Note the backlog is just a best effort depending on the OS. Setting this option to a value such as 200, 500 or 1000, tells the TCP stack how long the accept queue can be If this option is not configured, then the backlog depends on OS setting. int bossCount (consumer) When netty works on nio mode, it uses default bossCount parameter from Netty, which is 1. User can use this operation to override the default bossCount from Netty 1 int bossPool (consumer) To use a explicit org.jboss.netty.channel.socket.nio.BossPool as the boss thread pool. For example to share a thread pool with multiple consumers. By default each consumer has their own boss pool with 1 core thread. BossPool channelGroup (consumer) To use a explicit ChannelGroup. 
ChannelGroup chunkedMaxContentLength (consumer) Value in bytes the max content length per chunked frame received on the Netty HTTP server. 1048576 int compression (consumer) Allow using gzip/deflate for compression on the Netty HTTP server if the client supports it from the HTTP headers. false boolean disableStreamCache (consumer) Determines whether or not the raw input stream from Netty HttpRequest#getContent() is cached or not (Camel will read the stream into a in light-weight memory based Stream caching) cache. By default Camel will cache the Netty input stream to support reading it multiple times to ensure it Camel can retrieve all data from the stream. However you can set this option to true when you for example need to access the raw stream, such as streaming it directly to a file or other persistent store. Mind that if you enable this option, then you cannot read the Netty stream multiple times out of the box, and you would need manually to reset the reader index on the Netty raw stream. false boolean disconnectOnNoReply (consumer) If sync is enabled then this option dictates NettyConsumer if it should disconnect where there is no reply to send back. true boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern httpMethodRestrict (consumer) To disable HTTP methods on the Netty HTTP consumer. You can specify multiple separated by comma. String mapHeaders (consumer) If this option is enabled, then during binding from Netty to Camel Message then the headers will be mapped as well (eg added as header to the Camel Message as well). You can turn off this option to disable this. The headers can still be accessed from the org.apache.camel.component.netty.http.NettyHttpMessage message with the method getHttpRequest() that returns the Netty HTTP request org.jboss.netty.handler.codec.http.HttpRequest instance. true boolean maxChannelMemorySize (consumer) The maximum total size of the queued events per channel when using orderedThreadPoolExecutor. Specify 0 to disable. 10485760 long maxHeaderSize (consumer) The maximum length of all headers. If the sum of the length of each header exceeds this value, a TooLongFrameException will be raised. 8192 int maxTotalMemorySize (consumer) The maximum total size of the queued events for this pool when using orderedThreadPoolExecutor. Specify 0 to disable. 209715200 long nettyServerBootstrapFactory (consumer) To use a custom NettyServerBootstrapFactory NettyServerBootstrap Factory nettySharedHttpServer (consumer) To use a shared Netty HTTP server. See Netty HTTP Server Example for more details. NettySharedHttpServer noReplyLogLevel (consumer) If sync is enabled this option dictates NettyConsumer which logging level to use when logging a there is no reply to send back. WARN LoggingLevel orderedThreadPoolExecutor (consumer) Whether to use ordered thread pool, to ensure events are processed orderly on the same channel. See details at the netty javadoc of org.jboss.netty.handler.execution.OrderedMemoryAwareThreadPoolExecutor for more details. 
true boolean serverClosedChannel ExceptionCaughtLogLevel (consumer) If the server (NettyConsumer) catches an java.nio.channels.ClosedChannelException then its logged using this logging level. This is used to avoid logging the closed channel exceptions, as clients can disconnect abruptly and then cause a flood of closed exceptions in the Netty server. DEBUG LoggingLevel serverExceptionCaughtLog Level (consumer) If the server (NettyConsumer) catches an exception then its logged using this logging level. WARN LoggingLevel serverPipelineFactory (consumer) To use a custom ServerPipelineFactory ServerPipelineFactory traceEnabled (consumer) Specifies whether to enable HTTP TRACE for this Netty HTTP consumer. By default TRACE is turned off. false boolean urlDecodeHeaders (consumer) If this option is enabled, then during binding from Netty to Camel Message then the header values will be URL decoded (eg %20 will be a space character. Notice this option is used by the default org.apache.camel.component.netty.http.NettyHttpBinding and therefore if you implement a custom org.apache.camel.component.netty.http.NettyHttpBinding then you would need to decode the headers accordingly to this option. false boolean workerCount (consumer) When netty works on nio mode, it uses default workerCount parameter from Netty, which is cpu_core_threads2. User can use this operation to override the default workerCount from Netty int workerPool (consumer) To use a explicit org.jboss.netty.channel.socket.nio.WorkerPool as the worker thread pool. For example to share a thread pool with multiple consumers. By default each consumer has their own worker pool with 2 x cpu count core threads. WorkerPool connectTimeout (producer) Time to wait for a socket connection to be available. Value is in millis. 10000 long requestTimeout (producer) Allows to use a timeout for the Netty producer when calling a remote server. By default no timeout is in use. The value is in milli seconds, so eg 30000 is 30 seconds. The requestTimeout is using Netty's ReadTimeoutHandler to trigger the timeout. long throwExceptionOnFailure (producer) Option to disable throwing the HttpOperationFailedException in case of failed responses from the remote server. This allows you to get all responses regardless of the HTTP status code. true boolean clientPipelineFactory (producer) To use a custom ClientPipelineFactory ClientPipelineFactory lazyChannelCreation (producer) Channels can be lazily created to avoid exceptions, if the remote server is not up and running when the Camel producer is started. true boolean okStatusCodeRange (producer) The status codes which are considered a success response. The values are inclusive. Multiple ranges can be defined, separated by comma, e.g. 200-204,209,301-304. Each range must be a single number or from-to with the dash included. The default range is 200-299 200-299 String producerPoolEnabled (producer) Whether producer pool is enabled or not. Important: Do not turn this off, as the pooling is needed for handling concurrency and reliable request/reply. true boolean producerPoolMaxActive (producer) Sets the cap on the number of objects that can be allocated by the pool (checked out to clients, or idle awaiting checkout) at a given time. Use a negative value for no limit. -1 int producerPoolMaxIdle (producer) Sets the cap on the number of idle instances in the pool. 
100 int producerPoolMinEvictable Idle (producer) Sets the minimum amount of time (value in millis) an object may sit idle in the pool before it is eligible for eviction by the idle object evictor. 300000 long producerPoolMinIdle (producer) Sets the minimum number of instances allowed in the producer pool before the evictor thread (if active) spawns new objects. int useChannelBuffer (producer) If the useChannelBuffer is true, netty producer will turn the message body into ChannelBuffer before sending it out. false boolean useRelativePath (producer) Sets whether to use a relative path in HTTP requests. Some third party backend systems such as IBM Datapower do not support absolute URIs in HTTP POSTs, and setting this option to true can work around this problem. false boolean bootstrapConfiguration (advanced) To use a custom configured NettyServerBootstrapConfiguration for configuring this endpoint. NettyServerBootstrap Configuration configuration (advanced) To use a custom configured NettyHttpConfiguration for configuring this endpoint. NettyHttpConfiguration headerFilterStrategy (advanced) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter headers. HeaderFilterStrategy nettyHttpBinding (advanced) To use a custom org.apache.camel.component.netty.http.NettyHttpBinding for binding to/from Netty and Camel Message API. NettyHttpBinding options (advanced) Allows to configure additional netty options using option. as prefix. For example option.child.keepAlive=false to set the netty option child.keepAlive=false. See the Netty documentation for possible options that can be used. Map receiveBufferSize (advanced) The TCP/UDP buffer sizes to be used during inbound communication. Size is bytes. 65536 long receiveBufferSizePredictor (advanced) Configures the buffer size predictor. See details at Jetty documentation and this mail thread. int sendBufferSize (advanced) The TCP/UDP buffer sizes to be used during outbound communication. Size is bytes. 65536 long synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean transferException (advanced) If enabled and an Exchange failed processing on the consumer side, and if the caused Exception was send back serialized in the response as a application/x-java-serialized-object content type. On the producer side the exception will be deserialized and thrown as is, instead of the HttpOperationFailedException. The caused exception is required to be serialized. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. false boolean transferExchange (advanced) Only used for TCP. You can transfer the exchange over the wire instead of just the body. The following fields are transferred: In body, Out body, fault body, In headers, Out headers, fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false boolean decoder (codec) Deprecated To use a single decoder. This options is deprecated use encoders instead. ChannelHandler decoders (codec) A list of decoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. String encoder (codec) Deprecated To use a single encoder. 
This options is deprecated use encoders instead. ChannelHandler encoders (codec) A list of encoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. String enabledProtocols (security) Which protocols to enable when using SSL TLSv1,TLSv1.1,TLSv1.2 String keyStoreFile (security) Client side certificate keystore to be used for encryption File keyStoreFormat (security) Keystore format to be used for payload encryption. Defaults to JKS if not set JKS String keyStoreResource (security) Client side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String needClientAuth (security) Configures whether the server needs client authentication when using SSL. false boolean passphrase (security) Password setting to use in order to encrypt/decrypt payloads sent using SSH String securityConfiguration (security) Refers to a org.apache.camel.component.netty.http.NettyHttpSecurityConfiguration for configuring secure web resources. NettyHttpSecurity Configuration securityOptions (security) To configure NettyHttpSecurityConfiguration using key/value pairs from the map Map securityProvider (security) Security provider to be used for payload encryption. Defaults to SunX509 if not set. SunX509 String ssl (security) Setting to specify whether SSL encryption is applied to this endpoint false boolean sslClientCertHeaders (security) When enabled and in SSL mode, then the Netty consumer will enrich the Camel Message with headers having information about the client certificate such as subject name, issuer name, serial number, and the valid date range. false boolean sslContextParameters (security) To configure security using SSLContextParameters SSLContextParameters sslHandler (security) Reference to a class that could be used to return an SSL Handler SslHandler trustStoreFile (security) Server side certificate keystore to be used for encryption File trustStoreResource (security) Server side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String 241.3. Spring Boot Auto-Configuration The component supports 31 options, which are listed below. Name Description Default Type camel.component.netty-http.configuration.allow-default-codec Boolean camel.component.netty-http.configuration.bridge-endpoint If the option is true, the producer will ignore the Exchange.HTTP_URI header, and use the endpoint's URI for request. You may also set the throwExceptionOnFailure to be false to let the producer send all the fault response back. The consumer working in the bridge mode will skip the gzip compression and WWW URL form encoding (by adding the Exchange.SKIP_GZIP_ENCODING and Exchange.SKIP_WWW_FORM_URLENCODED headers to the consumed exchange). false Boolean camel.component.netty-http.configuration.chunked-max-content-length Value in bytes the max content length per chunked frame received on the Netty HTTP server. 1048576 Integer camel.component.netty-http.configuration.compression Allow using gzip/deflate for compression on the Netty HTTP server if the client supports it from the HTTP headers. 
false Boolean camel.component.netty-http.configuration.disable-stream-cache Determines whether or not the raw input stream from Netty HttpRequest#getContent() is cached or not (Camel will read the stream into a in light-weight memory based Stream caching) cache. By default Camel will cache the Netty input stream to support reading it multiple times to ensure it Camel can retrieve all data from the stream. However you can set this option to true when you for example need to access the raw stream, such as streaming it directly to a file or other persistent store. Mind that if you enable this option, then you cannot read the Netty stream multiple times out of the box, and you would need manually to reset the reader index on the Netty raw stream. false Boolean camel.component.netty-http.configuration.host The local hostname such as localhost, or 0.0.0.0 when being a consumer. The remote HTTP server hostname when using producer. String camel.component.netty-http.configuration.map-headers If this option is enabled, then during binding from Netty to Camel Message then the headers will be mapped as well (eg added as header to the Camel Message as well). You can turn off this option to disable this. The headers can still be accessed from the org.apache.camel.component.netty.http.NettyHttpMessage message with the method getHttpRequest() that returns the Netty HTTP request org.jboss.netty.handler.codec.http.HttpRequest instance. true Boolean camel.component.netty-http.configuration.match-on-uri-prefix Whether or not Camel should try to find a target consumer by matching the URI prefix if no exact match is found. false Boolean camel.component.netty-http.configuration.max-header-size The maximum length of all headers. If the sum of the length of each header exceeds this value, a TooLongFrameException will be raised. 8192 Integer camel.component.netty-http.configuration.ok-status-code-range The status codes which are considered a success response. The values are inclusive. Multiple ranges can be defined, separated by comma, e.g. 200-204,209,301-304. Each range must be a single number or from-to with the dash included. The default range is 200-299 200-299 String camel.component.netty-http.configuration.path Resource path String camel.component.netty-http.configuration.port The port number. Is default 80 for http and 443 for https. Integer camel.component.netty-http.configuration.protocol The protocol to use which is either http or https String camel.component.netty-http.configuration.send503when-suspended Whether to send back HTTP status code 503 when the consumer has been suspended. If the option is false then the Netty Acceptor is unbound when the consumer is suspended, so clients cannot connect anymore. true Boolean camel.component.netty-http.configuration.throw-exception-on-failure Option to disable throwing the HttpOperationFailedException in case of failed responses from the remote server. This allows you to get all responses regardless of the HTTP status code. true Boolean camel.component.netty-http.configuration.transfer-exception If enabled and an Exchange failed processing on the consumer side, and if the caused Exception was send back serialized in the response as a application/x-java-serialized-object content type. On the producer side the exception will be deserialized and thrown as is, instead of the HttpOperationFailedException. The caused exception is required to be serialized. This is by default turned off. 
If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. false Boolean camel.component.netty-http.configuration.url-decode-headers If this option is enabled, then during binding from Netty to Camel Message then the header values will be URL decoded (eg %20 will be a space character. Notice this option is used by the default org.apache.camel.component.netty.http.NettyHttpBinding and therefore if you implement a custom org.apache.camel.component.netty.http.NettyHttpBinding then you would need to decode the headers accordingly to this option. false Boolean camel.component.netty-http.configuration.use-relative-path Sets whether to use a relative path in HTTP requests. Some third party backend systems such as IBM Datapower do not support absolute URIs in HTTP POSTs, and setting this option to true can work around this problem. false Boolean camel.component.netty-http.enabled Enable netty-http component true Boolean camel.component.netty-http.header-filter-strategy To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter headers. The option is a org.apache.camel.spi.HeaderFilterStrategy type. String camel.component.netty-http.maximum-pool-size The core pool size for the ordered thread pool, if its in use. The default value is 16. 16 Integer camel.component.netty-http.netty-http-binding To use a custom org.apache.camel.component.netty.http.NettyHttpBinding for binding to/from Netty and Camel Message API. The option is a org.apache.camel.component.netty.http.NettyHttpBinding type. String camel.component.netty-http.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.netty-http.security-configuration.authenticate Whether to enable authentication <p/> This is by default enabled. Boolean camel.component.netty-http.security-configuration.constraint The supported restricted. <p/> Currently only Basic is supported. String camel.component.netty-http.security-configuration.login-denied-logging-level Sets a logging level to use for logging denied login attempts (incl stacktraces) <p/> This level is by default DEBUG. LoggingLevel camel.component.netty-http.security-configuration.realm Sets the name of the realm to use. String camel.component.netty-http.security-configuration.role-class-name String camel.component.netty-http.security-configuration.security-authenticator Sets the {@link SecurityAuthenticator} to use for authenticating the {@link HttpPrincipal} . SecurityAuthenticator camel.component.netty-http.security-configuration.security-constraint Sets a {@link SecurityConstraint} to use for checking if a web resource is restricted or not <p/> By default this is <tt>null</tt>, which means all resources is restricted. SecurityConstraint camel.component.netty-http.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean 241.4. Message Headers The following headers can be used on the producer to control the HTTP request. Name Type Description CamelHttpMethod String Allow to control what HTTP method to use such as GET, POST, TRACE etc. The type can also be a org.jboss.netty.handler.codec.http.HttpMethod instance. CamelHttpQuery String Allows to provide URI query parameters as a String value that overrides the endpoint configuration. Separate multiple parameters using the & sign. For example: foo=bar&beer=yes . 
CamelHttpPath String Camel 2.13.1/2.12.4: Allows to provide URI context-path and query parameters as a String value that overrides the endpoint configuration. This allows to reuse the same producer for calling same remote http server, but using a dynamic context-path and query parameters. Content-Type String To set the content-type of the HTTP body. For example: text/plain; charset="UTF-8" . CamelHttpResponseCode int Allows to set the HTTP Status code to use. By default 200 is used for success, and 500 for failure. The following headers is provided as meta-data when a route starts from an Netty HTTP endpoint: The description in the table takes offset in a route having: from("netty-http:http:0.0.0.0:8080/myapp")... Name Type Description CamelHttpMethod String The HTTP method used, such as GET, POST, TRACE etc. CamelHttpUrl String The URL including protocol, host and port, etc CamelHttpUri String The URI without protocol, host and port, etc CamelHttpQuery String Any query parameters, such as foo=bar&beer=yes CamelHttpRawQuery String Camel 2.13.0 : Any query parameters, such as foo=bar&beer=yes . Stored in the raw form, as they arrived to the consumer (i.e. before URL decoding). CamelHttpPath String Additional context-path. This value is empty if the client called the context-path /myapp . If the client calls /myapp/mystuff , then this header value is /mystuff . In other words its the value after the context-path configured on the route endpoint. CamelHttpCharacterEncoding String The charset from the content-type header. CamelHttpAuthentication String If the user was authenticated using HTTP Basic then this header is added with the value Basic . Content-Type String The content type if provided. For example: text/plain; charset="UTF-8" . 241.5. Access to Netty types This component uses the org.apache.camel.component.netty.http.NettyHttpMessage as the message implementation on the Exchange. This allows end users to get access to the original Netty request/response instances if needed, as shown below. Mind that the original response may not be accessible at all times. org.jboss.netty.handler.codec.http.HttpRequest request = exchange.getIn(NettyHttpMessage.class).getHttpRequest(); 241.6. Examples In the route below we use Netty HTTP as a HTTP server, which returns back a hardcoded "Bye World" message. from("netty-http:http://0.0.0.0:8080/foo") .transform().constant("Bye World"); And we can call this HTTP server using Camel also, with the ProducerTemplate as shown below: String out = template.requestBody("netty-http:http://0.0.0.0:8080/foo", "Hello World", String.class); System.out.println(out); And we get back "Bye World" as the output. 241.7. How do I let Netty match wildcards By default Netty HTTP will only match on exact uri's. But you can instruct Netty to match prefixes. For example from("netty-http:http://0.0.0.0:8123/foo").to("mock:foo"); In the route above Netty HTTP will only match if the uri is an exact match, so it will match if you enter http://0.0.0.0:8123/foo but not match if you do http://0.0.0.0:8123/foo/bar . So if you want to enable wildcard matching you do as follows: from("netty-http:http://0.0.0.0:8123/foo?matchOnUriPrefix=true").to("mock:foo"); So now Netty matches any endpoints with starts with foo . To match any endpoint you can do: from("netty-http:http://0.0.0.0:8123?matchOnUriPrefix=true").to("mock:foo"); 241.8. 
Using multiple routes with same port In the same CamelContext you can have multiple routes from Netty HTTP that shares the same port (eg a org.jboss.netty.bootstrap.ServerBootstrap instance). Doing this requires a number of bootstrap options to be identical in the routes, as the routes will share the same org.jboss.netty.bootstrap.ServerBootstrap instance. The instance will be configured with the options from the first route created. The options the routes must be identical configured is all the options defined in the org.apache.camel.component.netty.NettyServerBootstrapConfiguration configuration class. If you have configured another route with different options, Camel will throw an exception on startup, indicating the options is not identical. To mitigate this ensure all options is identical. Here is an example with two routes that share the same port. Two routes sharing the same port from("netty-http:http://0.0.0.0:{{port}}/foo") .to("mock:foo") .transform().constant("Bye World"); from("netty-http:http://0.0.0.0:{{port}}/bar") .to("mock:bar") .transform().constant("Bye Camel"); And here is an example of a mis configured 2nd route that do not have identical org.apache.camel.component.netty.NettyServerBootstrapConfiguration option as the 1st route. This will cause Camel to fail on startup. Two routes sharing the same port, but the 2nd route is misconfigured and will fail on starting from("netty-http:http://0.0.0.0:{{port}}/foo") .to("mock:foo") .transform().constant("Bye World"); // we cannot have a 2nd route on same port with SSL enabled, when the 1st route is NOT from("netty-http:http://0.0.0.0:{{port}}/bar?ssl=true") .to("mock:bar") .transform().constant("Bye Camel"); 241.8.1. Reusing same server bootstrap configuration with multiple routes By configuring the common server bootstrap option in an single instance of a org.apache.camel.component.netty.NettyServerBootstrapConfiguration type, we can use the bootstrapConfiguration option on the Netty HTTP consumers to refer and reuse the same options across all consumers. <bean id="nettyHttpBootstrapOptions" class="org.apache.camel.component.netty.NettyServerBootstrapConfiguration"> <property name="backlog" value="200"/> <property name="connectTimeout" value="20000"/> <property name="workerCount" value="16"/> </bean> And in the routes you refer to this option as shown below <route> <from uri="netty-http:http://0.0.0.0:{{port}}/foo?bootstrapConfiguration=#nettyHttpBootstrapOptions"/> ... </route> <route> <from uri="netty-http:http://0.0.0.0:{{port}}/bar?bootstrapConfiguration=#nettyHttpBootstrapOptions"/> ... </route> <route> <from uri="netty-http:http://0.0.0.0:{{port}}/beer?bootstrapConfiguration=#nettyHttpBootstrapOptions"/> ... </route> 241.8.2. Reusing same server bootstrap configuration with multiple routes across multiple bundles in OSGi container See the Netty HTTP Server Example for more details and example how to do that. 241.9. Using HTTP Basic Authentication The Netty HTTP consumer supports HTTP basic authentication by specifying the security realm name to use, as shown below <route> <from uri="netty-http:http://0.0.0.0:{{port}}/foo?securityConfiguration.realm=karaf"/> ... </route> The realm name is mandatory to enable basic authentication. By default the JAAS based authenticator is used, which will use the realm name specified (karaf in the example above) and use the JAAS realm and the JAAS \{{LoginModule}}s of this realm for authentication. 
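The same consumer can also be declared in the Java DSL. The following sketch is illustrative only; it uses the securityConfiguration.realm option shown above, and the port property placeholder and mock endpoint are hypothetical: from("netty-http:http://0.0.0.0:{{port}}/foo?securityConfiguration.realm=karaf") .to("mock:foo"); Clients that do not present valid Basic credentials for the configured realm receive an HTTP 401 response.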
End users of Apache Karaf / ServiceMix have a karaf realm out of the box, which is why the examples above work out of the box in those containers. 241.9.1. Specifying ACL on web resources The org.apache.camel.component.netty.http.SecurityConstraint allows you to define constraints on web resources. And the org.apache.camel.component.netty.http.SecurityConstraintMapping is provided out of the box, allowing you to easily define inclusions and exclusions with roles. For example, as shown below in the XML DSL, we define the constraint bean: <bean id="constraint" class="org.apache.camel.component.netty.http.SecurityConstraintMapping"> <!-- inclusions defines url -> roles restrictions --> <!-- a * should be used for any role accepted (or even no roles) --> <property name="inclusions"> <map> <entry key="/*" value="*"/> <entry key="/admin/*" value="admin"/> <entry key="/guest/*" value="admin,guest"/> </map> </property> <!-- exclusions is used to define public urls, which requires no authentication --> <property name="exclusions"> <set> <value>/public/*</value> </set> </property> </bean> The constraint above is defined so that access to /* is restricted and any role is accepted (even if the user has no roles) access to /admin/* requires the admin role access to /guest/* requires the admin or guest role access to /public/* is an exclusion, which means no authentication is needed, so it is public for everyone without logging in To use this constraint we just need to refer to the bean id as shown below: <route> <from uri="netty-http:http://0.0.0.0:{{port}}/foo?matchOnUriPrefix=true&securityConfiguration.realm=karaf&securityConfiguration.securityConstraint=#constraint"/> ... </route> 241.10. See Also Configuring Camel Component Endpoint Getting Started Netty Netty HTTP Server Example Jetty | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-netty-http</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"netty-http:http://0.0.0.0:8080[?options]",
"netty-http:protocol:host:port/path",
"org.jboss.netty.handler.codec.http.HttpRequest request = exchange.getIn(NettyHttpMessage.class).getHttpRequest();",
"from(\"netty-http:http://0.0.0.0:8080/foo\") .transform().constant(\"Bye World\");",
"String out = template.requestBody(\"netty-http:http://0.0.0.0:8080/foo\", \"Hello World\", String.class); System.out.println(out);",
"from(\"netty-http:http://0.0.0.0:8123/foo\").to(\"mock:foo\");",
"from(\"netty-http:http://0.0.0.0:8123/foo?matchOnUriPrefix=true\").to(\"mock:foo\");",
"from(\"netty-http:http://0.0.0.0:8123?matchOnUriPrefix=true\").to(\"mock:foo\");",
"from(\"netty-http:http://0.0.0.0:{{port}}/foo\") .to(\"mock:foo\") .transform().constant(\"Bye World\"); from(\"netty-http:http://0.0.0.0:{{port}}/bar\") .to(\"mock:bar\") .transform().constant(\"Bye Camel\");",
"from(\"netty-http:http://0.0.0.0:{{port}}/foo\") .to(\"mock:foo\") .transform().constant(\"Bye World\"); // we cannot have a 2nd route on same port with SSL enabled, when the 1st route is NOT from(\"netty-http:http://0.0.0.0:{{port}}/bar?ssl=true\") .to(\"mock:bar\") .transform().constant(\"Bye Camel\");",
"<bean id=\"nettyHttpBootstrapOptions\" class=\"org.apache.camel.component.netty.NettyServerBootstrapConfiguration\"> <property name=\"backlog\" value=\"200\"/> <property name=\"connectTimeout\" value=\"20000\"/> <property name=\"workerCount\" value=\"16\"/> </bean>",
"<route> <from uri=\"netty-http:http://0.0.0.0:{{port}}/foo?bootstrapConfiguration=#nettyHttpBootstrapOptions\"/> </route> <route> <from uri=\"netty-http:http://0.0.0.0:{{port}}/bar?bootstrapConfiguration=#nettyHttpBootstrapOptions\"/> </route> <route> <from uri=\"netty-http:http://0.0.0.0:{{port}}/beer?bootstrapConfiguration=#nettyHttpBootstrapOptions\"/> </route>",
"<route> <from uri=\"netty-http:http://0.0.0.0:{{port}}/foo?securityConfiguration.realm=karaf\"/> </route>",
"<bean id=\"constraint\" class=\"org.apache.camel.component.netty.http.SecurityConstraintMapping\"> <!-- inclusions defines url -> roles restrictions --> <!-- a * should be used for any role accepted (or even no roles) --> <property name=\"inclusions\"> <map> <entry key=\"/*\" value=\"*\"/> <entry key=\"/admin/*\" value=\"admin\"/> <entry key=\"/guest/*\" value=\"admin,guest\"/> </map> </property> <!-- exclusions is used to define public urls, which requires no authentication --> <property name=\"exclusions\"> <set> <value>/public/*</value> </set> </property> </bean>",
"<route> <from uri=\"netty-http:http://0.0.0.0:{{port}}/foo?matchOnUriPrefix=true&securityConfiguration.realm=karaf&securityConfiguration.securityConstraint=#constraint\"/> </route>"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/netty-http-component |
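As a worked illustration of the Spring Boot auto-configuration options listed in the table above, the following application.properties sketch is hypothetical; the values are examples only and the property names are taken verbatim from the table:
camel.component.netty-http.configuration.protocol=http
camel.component.netty-http.configuration.port=8080
camel.component.netty-http.configuration.match-on-uri-prefix=true
camel.component.netty-http.configuration.max-header-size=16384
camel.component.netty-http.configuration.ok-status-code-range=200-299,301-304
camel.component.netty-http.maximum-pool-size=32
The configuration.* entries populate the component-wide configuration that endpoints inherit unless the same option is overridden in the endpoint URI.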
Chapter 4. Important links | Chapter 4. Important links Red Hat AMQ Supported Configurations Red Hat AMQ Component Details | null | https://docs.redhat.com/en/documentation/red_hat_amq_clients/2023.q4/html/amq_clients_overview/important_links |
Chapter 1. Metadata APIs | Chapter 1. Metadata APIs 1.1. APIRequestCount [apiserver.openshift.io/v1] Description APIRequestCount tracks requests made to an API. The instance name must be of the form resource.version.group , matching the resource. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. Binding [v1] Description Binding ties one object to another; for example, a pod is bound to a node by a scheduler. Deprecated in 1.7, please use the bindings subresource of pods instead. Type object 1.3. ComponentStatus [v1] Description ComponentStatus (and ComponentStatusList) holds the cluster validation info. Deprecated: This API is deprecated in v1.19+ Type object 1.4. ConfigMap [v1] Description ConfigMap holds configuration data for pods to consume. Type object 1.5. ControllerRevision [apps/v1] Description ControllerRevision implements an immutable snapshot of state data. Clients are responsible for serializing and deserializing the objects that contain their internal state. Once a ControllerRevision has been successfully created, it can not be updated. The API Server will fail validation of all requests that attempt to mutate the Data field. ControllerRevisions may, however, be deleted. Note that, due to its use by both the DaemonSet and StatefulSet controllers for update and rollback, this object is beta. However, it may be subject to name and representation changes in future releases, and clients should not depend on its stability. It is primarily for internal use by controllers. Type object 1.6. Event [events.k8s.io/v1] Description Event is a report of an event somewhere in the cluster. It generally denotes some state change in the system. Events have a limited retention time and triggers and messages may evolve with time. Event consumers should not rely on the timing of an event with a given Reason reflecting a consistent underlying trigger, or the continued existence of events with that Reason. Events should be treated as informative, best-effort, supplemental data. Type object 1.7. Event [v1] Description Event is a report of an event somewhere in the cluster. Events have a limited retention time and triggers and messages may evolve with time. Event consumers should not rely on the timing of an event with a given Reason reflecting a consistent underlying trigger, or the continued existence of events with that Reason. Events should be treated as informative, best-effort, supplemental data. Type object 1.8. Lease [coordination.k8s.io/v1] Description Lease defines a lease concept. Type object 1.9. Namespace [v1] Description Namespace provides a scope for Names. Use of multiple namespaces is optional. Type object | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/metadata_apis/metadata-apis |
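As a brief illustration of how the first of these objects is typically consumed, APIRequestCount instances can be listed and inspected with the CLI to see which API resources are still receiving requests. The instance name below, deployments.v1.apps, is only an example of the resource.version.group naming form described above:
USD oc get apirequestcounts
USD oc get apirequestcount deployments.v1.apps -o yaml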
Chapter 21. Time Parameters | Chapter 21. Time Parameters You can modify the time synchronization service with time parameters. Parameter Description ChronyAclRules Access Control List of NTP clients. By default no clients are permitted. The default value is ['deny all'] . ChronyGlobalPoolOptions Default pool options for the configured NTP pools in chrony.conf. If this is specified, NtpIburstEnable, MaxPoll, and MinPoll are ignored. ChronyGlobalServerOptions Default server options for the configured NTP servers in chrony.conf. If this is specified, NtpIburstEnable, MaxPoll, and MinPoll are ignored. EnablePackageInstall Set to true to enable package installation at deploy time. The default value is false . MaxPoll Specifies the maximum poll interval of upstream servers for NTP messages, in seconds to the power of two. Allowed values are 4 to 17. The default value is 10 . MinPoll Specifies the minimum poll interval of upstream servers for NTP messages, in seconds to the power of two. The minimum poll interval defaults to 6 (64 s). Allowed values are 4 to 17. The default value is 6 . NtpIburstEnable Specifies whether to enable the iburst option for every NTP peer. If iburst is enabled, when the NTP server is unreachable NTP will send a burst of eight packets instead of one. This is designed to speed up the initial synchronization. The default value is true . NtpPool NTP pool list. Defaults to [], so only NtpServer is used by default. NtpServer NTP servers list. The default value is ['0.pool.ntp.org', '1.pool.ntp.org', '2.pool.ntp.org', '3.pool.ntp.org'] . TimeZone The timezone to be set on the overcloud. The default value is UTC . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/overcloud_parameters/ref_time-parameters_overcloud_parameters
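For reference, these parameters are normally set in a custom environment file that is passed to the overcloud deployment command. The following sketch is a minimal, hypothetical example; the NTP host names are placeholders and only parameters from the table above are used:
parameter_defaults:
  NtpServer: ['ntp0.example.com', 'ntp1.example.com']
  MinPoll: 6
  MaxPoll: 10
  TimeZone: 'UTC'
Include the file with the -e option on the openstack overcloud deploy command line.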
Chapter 6. Migrating custom providers | Chapter 6. Migrating custom providers Similarly to the Red Hat Single Sign-On 7.6, custom providers are deployed to the Red Hat build of Keycloak by copying them to a deployment directory. In the Red Hat build of Keycloak, copy your providers to the providers directory instead of standalone/deployments , which no longer exists. Additional dependencies should also be copied to the providers directory. Red Hat build of Keycloak does not use a separate classpath for custom providers, so you may need to be more careful with additional dependencies that you include. In addition, the EAR and WAR packaging formats, and jboss-deployment-structure.xml files, are no longer supported. While Red Hat Single Sign-On 7.6 automatically discovered custom providers, and even supported the ability to hot-deploy custom providers while Keycloak is running, this behavior is no longer supported. Also, after you make a change to the providers or dependencies in the providers directory, you have to do a build or restart the server with the auto build feature. Depending on what APIs your providers use you may also need to make some changes to the providers. See the following sections for details. 6.1. Transition from Java EE to Jakarta EE Keycloak migrated its codebase from Java EE (Enterprise Edition) to Jakarta EE, which brought various changes. We have upgraded all Jakarta EE specifications in order to support Jakarta EE 10, such as: Jakarta Persistence 3.1 Jakarta RESTful Web Services 3.1 Jakarta Mail API 2.1 Jakarta Servlet 6.0 Jakarta Activation 2.1 Jakarta EE 10 provides a modernized, simplified, lightweight approach to building cloud-native Java applications. The main changes provided within this initiative are changing the namespace from javax.* to jakarta.* . This change does not apply for javax.* packages provided directly in the JDK, such as javax.security , javax.net , javax.crypto , etc. In addition, Jakarta EE APIs like session/stateless beans are no longer supported. 6.2. Removed third party dependencies Some dependencies were removed in Red Hat build of Keycloak including openshift-rest-client okio-jvm okhttp commons-lang commons-compress jboss-dmr kotlin-stdlib Also, since Red Hat build of Keycloak is no longer based on EAP, most of the EAP dependencies were removed. This change means that if you use any of these libraries as dependencies of your own providers deployed to the Red Hat build of Keycloak, you may also need to copy those JAR files explicitly to the Keycloak distribution providers directory. 6.3. Context and dependency injection are no longer enabled for JAX-RS Resources To provide a better runtime and leverage as much as possible the underlying stack, all injection points for contextual data using the javax.ws.rs.core.Context annotation were removed. The expected improvement in performance involves no longer creating proxies instances multiple times during the request lifecycle, and drastically reducing the amount of reflection code at runtime. 
If you need access to the current request and response objects, you can now obtain their instances directly from the KeycloakSession : @Context org.jboss.resteasy.spi.HttpRequest request; @Context org.jboss.resteasy.spi.HttpResponse response; was replaced by: KeycloakSession session = // obtain the session, which is usually available when creating a custom provider from a factory KeycloakContext context = session.getContext(); HttpRequest request = context.getHttpRequest(); HttpResponse response = context.getHttpResponse(); Additional contextual data can be obtained from the runtime through the KeycloakContext instance: KeycloakSession session = // obtain the session KeycloakContext context = session.getContext(); MyContextualObject myContextualObject = context.getContextObject(MyContextualObject.class); 6.4. Deprecated methods from data providers and models Some previously deprecated methods are now removed in Red Hat build of Keycloak: RealmModel#searchForGroupByNameStream(String, Integer, Integer) UserProvider#getUsersStream(RealmModel, boolean) UserSessionPersisterProvider#loadUserSessions(int, int, boolean, int, String) Interfaces added for Streamification work, such as RoleMapperModel.Streams and similar KeycloakModelUtils#getClientScopeMappings Deprecated methods from KeycloakSession UserQueryProvider#getUsersStream methods Also, these other changes were made: Some methods from UserSessionProvider were moved to UserLoginFailureProvider . Streams interfaces in federated storage provider classes were deprecated. Streamification - interfaces now contain only Stream-based methods. For example in GroupProvider interface @Deprecated List<GroupModel> getGroups(RealmModel realm); was replaced by Stream<GroupModel> getGroupsStream(RealmModel realm); Consistent parameter ordering - methods now have strict parameter ordering where RealmModel is always the first parameter. For example in UserLookupProvider interface: @Deprecated UserModel getUserById(String id, RealmModel realm); was replaced by UserModel getUserById(RealmModel realm, String id) 6.4.1. List of changed interfaces ( o.k. stands for org.keycloak. package) server-spi module o.k.credential.CredentialInputUpdater o.k.credential.UserCredentialStore o.k.models.ClientProvider o.k.models.ClientSessionContext o.k.models.GroupModel o.k.models.GroupProvider o.k.models.KeyManager o.k.models.KeycloakSessionFactory o.k.models.ProtocolMapperContainerModel o.k.models.RealmModel o.k.models.RealmProvider o.k.models.RoleContainerModel o.k.models.RoleMapperModel o.k.models.RoleModel o.k.models.RoleProvider o.k.models.ScopeContainerModel o.k.models.UserCredentialManager o.k.models.UserModel o.k.models.UserProvider o.k.models.UserSessionProvider o.k.models.utils.RoleUtils o.k.sessions.AuthenticationSessionProvider o.k.storage.client.ClientLookupProvider o.k.storage.group.GroupLookupProvider o.k.storage.user.UserLookupProvider o.k.storage.user.UserQueryProvider server-spi-private module o.k.events.EventQuery o.k.events.admin.AdminEventQuery o.k.keys.KeyProvider 6.4.2. Refactorings in the storage layer Red Hat build of Keycloak undergoes a large refactoring to simplify the API usage, which impacts existing code. Some of these changes require updates to existing code. The following sections provide more detail. 6.4.2.1. Changes in the module structure Several public APIs around storage functionality in KeycloakSession have been consolidated, and some have been moved, deprecated, or removed. 
Three new modules have been introduced, and data-oriented code from server-spi , server-spi-private , and services modules have been moved there: org.keycloak:keycloak-model-legacy Contains all public facing APIs from the legacy store, such as the User Storage API. org.keycloak:keycloak-model-legacy-private Contains private implementations that relate to user storage management, such as storage *Manager classes. org.keycloak:keycloak-model-legacy-services Contains all REST endpoints that directly operate on the legacy store. If you are using for example in your custom user storage provider implementation the classes which have been moved to the new modules, you need to update your dependencies to include the new modules listed above. 6.4.2.2. Changes in KeycloakSession KeycloakSession has been simplified. Several methods have been removed in KeycloakSession . KeycloakSession session contained several methods for obtaining a provider for a particular object type, such as for a UserProvider there are users() , userLocalStorage() , userCache() , userStorageManager() , and userFederatedStorage() . This situation may be confusing for the developer who has to understand the exact meaning of each method. For those reasons, only the users() method is kept in KeycloakSession , and should replace all other calls listed above. The rest of the methods have been removed. The same pattern of depreciation applies to methods of other object areas, such as clients() or groups() . All methods ending in *StorageManager() and *LocalStorage() have been removed. The section describes how to migrate those calls to the new API or use the legacy API. 6.4.3. Migrating existing providers The existing providers need no migration if they do not call a removed method, which should be the case for most providers. If the provider uses removed methods, but does not rely on local versus non-local storage, changing a call from the now removed userLocalStorage() to the method users() is the best option. Be aware that the semantics change here as the new method involves a cache if that has been enabled in the local setup. Before migration: accessing a removed API doesn't compile session .userLocalStorage() ; After migration: accessing the new API when caller does not depend on the legacy storage API session .users() ; In the rare case when a custom provider needs to distinguish between the mode of a particular provider, access to the deprecated objects is provided by using the LegacyStoreManagers data store provider. This might be the case if the provider accesses the local storage directly or wants to skip the cache. This option will be available only if the legacy modules are part of the deployment. Before migration: accessing a removed API session .userLocalStorage() ; After migration: accessing the new functionality via the LegacyStoreManagers API ((LegacyDatastoreProvider) session.getProvider(DatastoreProvider.class)) .userLocalStorage() ; Some user storage related APIs have been wrapped in org.keycloak.storage.UserStorageUtil for convenience. 6.4.4. Changes to RealmModel The methods getUserStorageProviders , getUserStorageProvidersStream , getClientStorageProviders , getClientStorageProvidersStream , getRoleStorageProviders and getRoleStorageProvidersStream have been removed. 
Code which depends on these methods should cast the instance as follows: Before migration: code will not compile due to the changed API realm .getClientStorageProvidersStream() ...; After migration: cast the instance to the legacy interface ((LegacyRealmModel) realm) .getClientStorageProvidersStream() ...; Similarly, code that used to implement the interface RealmModel and wants to provide these methods should implement the new interface LegacyRealmModel . This interface is a sub-interface of RealmModel and includes the old methods: Before migration: code implements the old interface public class MyClass extends RealmModel { /* might not compile due to @Override annotations for methods no longer present in the interface RealmModel. */ /* ... */ } After migration: code implements the new interface public class MyClass extends LegacyRealmModel { /* ... */ } 6.4.5. Interface UserCache moved to the legacy module As the caching status of objects will be transparent to services, the interface UserCache has been moved to the module keycloak-model-legacy . Code that depends on the legacy implementation should access the UserCache directly. Before migration: code will not compile session.userCache().evict(realm, user); After migration: use the API directly UserStorageUtil.userCache(session); To trigger the invalidation of a realm, instead of using the UserCache API, consider triggering an event: Before migration: code uses the cache API UserCache cache = session.getProvider(UserCache.class); if (cache != null) cache.evict(realm); After migration: use the invalidation API session.invalidate(InvalidationHandler.ObjectType.REALM, realm.getId()); 6.4.6. Credential management for users Credentials for users were previously managed using session.userCredentialManager().method(realm, user, ...) . The new way is to leverage user.credentialManager().method(...) . This form moves the credential functionality closer to the API of users, and does not rely on prior knowledge of the user credential's location in regard to realm and storage. The old APIs have been removed. Before migration: accessing a removed API session.userCredentialManager() .createCredential (realm, user, credentialModel) After migration: accessing the new API user.credentialManager() .createStoredCredential (credentialModel) For a custom UserStorageProvider , there is a new method credentialManager() that needs to be implemented when returning a UserModel . Those must return an instance of the LegacyUserCredentialManager : Before migration: code will not compile due to the new method credentialManager() required by UserModel public class MyUserStorageProvider implements UserLookupProvider, ... { /* ... */ protected UserModel createAdapter(RealmModel realm, String username) { return new AbstractUserAdapter(session, realm, model) { @Override public String getUsername() { return username; } }; } } After migration: implementation of the API UserModel.credentialManager() for the legacy store. public class MyUserStorageProvider implements UserLookupProvider, ... { /* ... */ protected UserModel createAdapter(RealmModel realm, String username) { return new AbstractUserAdapter(session, realm, model) { @Override public String getUsername() { return username; } @Override public SubjectCredentialManager credentialManager() { return new LegacyUserCredentialManager(session, realm, this); } }; } } | [
"@Context org.jboss.resteasy.spi.HttpRequest request; @Context org.jboss.resteasy.spi.HttpResponse response;",
"KeycloakSession session = // obtain the session, which is usually available when creating a custom provider from a factory KeycloakContext context = session.getContext(); HttpRequest request = context.getHttpRequest(); HttpResponse response = context.getHttpResponse();",
"KeycloakSession session = // obtain the session KeycloakContext context = session.getContext(); MyContextualObject myContextualObject = context.getContextObject(MyContextualObject.class);",
"@Deprecated List<GroupModel> getGroups(RealmModel realm);",
"Stream<GroupModel> getGroupsStream(RealmModel realm);",
"@Deprecated UserModel getUserById(String id, RealmModel realm);",
"UserModel getUserById(RealmModel realm, String id)",
"session .userLocalStorage() ;",
"session .users() ;",
"session .userLocalStorage() ;",
"((LegacyDatastoreProvider) session.getProvider(DatastoreProvider.class)) .userLocalStorage() ;",
"realm .getClientStorageProvidersStream() ...;",
"((LegacyRealmModel) realm) .getClientStorageProvidersStream() ...;",
"public class MyClass extends RealmModel { /* might not compile due to @Override annotations for methods no longer present in the interface RealmModel. / / ... */ }",
"public class MyClass extends LegacyRealmModel { /* ... */ }",
"session**.userCache()**.evict(realm, user);",
"UserStorageUitl.userCache(session);",
"UserCache cache = session.getProvider(UserCache.class); if (cache != null) cache.evict(realm)();",
"session.invalidate(InvalidationHandler.ObjectType.REALM, realm.getId());",
"session.userCredentialManager() .createCredential (realm, user, credentialModel)",
"user.credentialManager() .createStoredCredential (credentialModel)",
"public class MyUserStorageProvider implements UserLookupProvider, ... { /* ... */ protected UserModel createAdapter(RealmModel realm, String username) { return new AbstractUserAdapter(session, realm, model) { @Override public String getUsername() { return username; } }; } }",
"public class MyUserStorageProvider implements UserLookupProvider, ... { /* ... */ protected UserModel createAdapter(RealmModel realm, String username) { return new AbstractUserAdapter(session, realm, model) { @Override public String getUsername() { return username; } @Override public SubjectCredentialManager credentialManager() { return new LegacyUserCredentialManager(session, realm, this); } }; } }"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/migration_guide/migrating-providers |
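To make the deployment step described at the beginning of this chapter concrete, the following shell sketch assumes a hypothetical provider JAR and a server installed under /opt/keycloak; adjust the paths to your installation. The kc.sh build step performs the build that is required after anything changes in the providers directory:
cp my-custom-provider.jar /opt/keycloak/providers/
cp provider-dependency.jar /opt/keycloak/providers/
/opt/keycloak/bin/kc.sh build
/opt/keycloak/bin/kc.sh start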
Chapter 7. Upgrading a geo-replication deployment of standalone Red Hat Quay | Chapter 7. Upgrading a geo-replication deployment of standalone Red Hat Quay Use the following procedure to upgrade your geo-replication Red Hat Quay deployment. Important When upgrading geo-replication Red Hat Quay deployments to the next y-stream release (for example, Red Hat Quay 3.7 to Red Hat Quay 3.8), you must stop operations before upgrading. There is intermittent downtime when upgrading from one y-stream release to the next. It is highly recommended to back up your Red Hat Quay deployment before upgrading. Prerequisites You have logged into registry.redhat.io . Procedure This procedure assumes that you are running Red Hat Quay services on three (or more) systems. For more information, see Preparing for Red Hat Quay high availability . Obtain a list of all Red Hat Quay instances on each system running a Red Hat Quay instance. Enter the following command on System A to reveal the Red Hat Quay instances: USD sudo podman ps Example output CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES ec16ece208c0 registry.redhat.io/quay/quay-rhel8:v{producty-n1} registry 6 minutes ago Up 6 minutes ago 0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp quay01 Enter the following command on System B to reveal the Red Hat Quay instances: USD sudo podman ps Example output CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 7ae0c9a8b37d registry.redhat.io/quay/quay-rhel8:v{producty-n1} registry 5 minutes ago Up 2 seconds ago 0.0.0.0:82->8080/tcp, 0.0.0.0:445->8443/tcp quay02 Enter the following command on System C to reveal the Red Hat Quay instances: USD sudo podman ps Example output CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES e75c4aebfee9 registry.redhat.io/quay/quay-rhel8:v{producty-n1} registry 4 seconds ago Up 4 seconds ago 0.0.0.0:84->8080/tcp, 0.0.0.0:447->8443/tcp quay03 Temporarily shut down all Red Hat Quay instances on each system. Enter the following command on System A to shut down the Red Hat Quay instance: USD sudo podman stop ec16ece208c0 Enter the following command on System B to shut down the Red Hat Quay instance: USD sudo podman stop 7ae0c9a8b37d Enter the following command on System C to shut down the Red Hat Quay instance: USD sudo podman stop e75c4aebfee9 Obtain the latest Red Hat Quay version, for example, Red Hat Quay 3, on each system. Enter the following command on System A to obtain the latest Red Hat Quay version: USD sudo podman pull registry.redhat.io/quay/quay-rhel8:{productminv} Enter the following command on System B to obtain the latest Red Hat Quay version: USD sudo podman pull registry.redhat.io/quay/quay-rhel8:v{producty} Enter the following command on System C to obtain the latest Red Hat Quay version: USD sudo podman pull registry.redhat.io/quay/quay-rhel8:{productminv} On System A of your highly available Red Hat Quay deployment, run the new image version, for example, Red Hat Quay 3: # sudo podman run --restart=always -p 443:8443 -p 80:8080 \ --sysctl net.core.somaxconn=4096 \ --name=quay01 \ -v /mnt/quay/config:/conf/stack:Z \ -v /mnt/quay/storage:/datastorage:Z \ -d registry.redhat.io/quay/quay-rhel8:{productminv} Wait for the new Red Hat Quay container to become fully operational on System A. 
You can check the status of the container by entering the following command: USD sudo podman ps Example output CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 70b9f38c3fb4 registry.redhat.io/quay/quay-rhel8:v{producty} registry 2 seconds ago Up 2 seconds ago 0.0.0.0:82->8080/tcp, 0.0.0.0:445->8443/tcp quay01 Optional: Ensure that Red Hat Quay is fully operational by navigating to the Red Hat Quay UI. After ensuring that Red Hat Quay on System A is fully operational, run the new image versions on System B and on System C. On System B of your highly available Red Hat Quay deployment, run the new image version, for example, Red Hat Quay 3: # sudo podman run --restart=always -p 443:8443 -p 80:8080 \ --sysctl net.core.somaxconn=4096 \ --name=quay02 \ -v /mnt/quay/config:/conf/stack:Z \ -v /mnt/quay/storage:/datastorage:Z \ -d registry.redhat.io/quay/quay-rhel8:{productminv} On System C of your highly available Red Hat Quay deployment, run the new image version, for example, Red Hat Quay 3: # sudo podman run --restart=always -p 443:8443 -p 80:8080 \ --sysctl net.core.somaxconn=4096 \ --name=quay03 \ -v /mnt/quay/config:/conf/stack:Z \ -v /mnt/quay/storage:/datastorage:Z \ -d registry.redhat.io/quay/quay-rhel8:{productminv} You can check the status of the containers on System B and on System C by entering the following command: USD sudo podman ps | [
"sudo podman ps",
"CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES ec16ece208c0 registry.redhat.io/quay/quay-rhel8:v{producty-n1} registry 6 minutes ago Up 6 minutes ago 0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp quay01",
"sudo podman ps",
"CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 7ae0c9a8b37d registry.redhat.io/quay/quay-rhel8:v{producty-n1} registry 5 minutes ago Up 2 seconds ago 0.0.0.0:82->8080/tcp, 0.0.0.0:445->8443/tcp quay02",
"sudo podman ps",
"CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES e75c4aebfee9 registry.redhat.io/quay/quay-rhel8:v{producty-n1} registry 4 seconds ago Up 4 seconds ago 0.0.0.0:84->8080/tcp, 0.0.0.0:447->8443/tcp quay03",
"sudo podman stop ec16ece208c0",
"sudo podman stop 7ae0c9a8b37d",
"sudo podman stop e75c4aebfee9",
"sudo podman pull registry.redhat.io/quay/quay-rhel8:{productminv}",
"sudo podman pull registry.redhat.io/quay/quay-rhel8:v{producty}",
"sudo podman pull registry.redhat.io/quay/quay-rhel8:{productminv}",
"sudo podman run --restart=always -p 443:8443 -p 80:8080 --sysctl net.core.somaxconn=4096 --name=quay01 -v /mnt/quay/config:/conf/stack:Z -v /mnt/quay/storage:/datastorage:Z -d registry.redhat.io/quay/quay-rhel8:{productminv}",
"sudo podman ps",
"CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 70b9f38c3fb4 registry.redhat.io/quay/quay-rhel8:v{producty} registry 2 seconds ago Up 2 seconds ago 0.0.0.0:82->8080/tcp, 0.0.0.0:445->8443/tcp quay01",
"sudo podman run --restart=always -p 443:8443 -p 80:8080 --sysctl net.core.somaxconn=4096 --name=quay02 -v /mnt/quay/config:/conf/stack:Z -v /mnt/quay/storage:/datastorage:Z -d registry.redhat.io/quay/quay-rhel8:{productminv}",
"sudo podman run --restart=always -p 443:8443 -p 80:8080 --sysctl net.core.somaxconn=4096 --name=quay03 -v /mnt/quay/config:/conf/stack:Z -v /mnt/quay/storage:/datastorage:Z -d registry.redhat.io/quay/quay-rhel8:{productminv}",
"sudo podman ps"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3/html/deploy_red_hat_quay_-_high_availability/upgrading-geo-repl-quay |
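Optionally, after all three systems are running the new image, each instance can also be checked with the registry health endpoint in addition to podman ps. The host names below are placeholders, and the path assumes the standard Red Hat Quay health API:
USD curl -k https://quay01.example.com/health/instance
USD curl -k https://quay02.example.com/health/instance
USD curl -k https://quay03.example.com/health/instance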
Appendix B. Metadata Server daemon configuration Reference | Appendix B. Metadata Server daemon configuration Reference Refer to this list of commands that can be used for the Metadata Server (MDS) daemon configuration. mon_force_standby_active Description If set to true , monitors force MDS in standby replay mode to be active. Set under the [mon] or [global] section in the Ceph configuration file. Type Boolean Default true max_mds Description The number of active MDS daemons during cluster creation. Set under the [mon] or [global] section in the Ceph configuration file. Type 32-bit Integer Default 1 mds_cache_memory_limit Description The memory limit the MDS enforces for its cache. Red Hat recommends using this parameter instead of the mds cache size parameter. Type 64-bit Integer Unsigned Default 1073741824 mds_cache_reservation Description The cache reservation, memory or inodes, for the MDS cache to maintain. The value is a percentage of the maximum cache configured. Once the MDS begins dipping into its reservation, it recalls client state until its cache size shrinks to restore the reservation. Type Float Default 0.05 mds_cache_size Description The number of inodes to cache. A value of 0 indicates an unlimited number. Red Hat recommends to use the mds_cache_memory_limit to limit the amount of memory the MDS cache uses. Type 32-bit Integer Default 0 mds_cache_mid Description The insertion point for new items in the cache LRU, from the top. Type Float Default 0.7 mds_dir_commit_ratio Description The fraction of directory that contains erroneous information before Ceph commits using a full update instead of partial update. Type Float Default 0.5 mds_dir_max_commit_size Description The maximum size of a directory update in MB before Ceph breaks the directory into smaller transactions. Type 32-bit Integer Default 90 mds_decay_halflife Description The half-life of the MDS cache temperature. Type Float Default 5 mds_beacon_interval Description The frequency, in seconds, of beacon messages sent to the monitor. Type Float Default 4 mds_beacon_grace Description The interval without beacons before Ceph declares a MDS laggy and possibly replaces it. Type Float Default 15 mds_blocklist_interval Description The blocklist duration for failed MDS daemons in the OSD map. Type Float Default 24.0*60.0 mds_session_timeout Description The interval, in seconds, of client inactivity before Ceph times out capabilities and leases. Type Float Default 60 mds_session_autoclose Description The interval, in seconds, before Ceph closes a laggy client's session. Type Float Default 300 mds_reconnect_timeout Description The interval, in seconds, to wait for clients to reconnect during a MDS restart. Type Float Default 45 mds_tick_interval Description How frequently the MDS performs internal periodic tasks. Type Float Default 5 mds_dirstat_min_interval Description The minimum interval, in seconds, to try to avoid propagating recursive statistics up the tree. Type Float Default 1 mds_scatter_nudge_interval Description How quickly changes in directory statistics propagate up. Type Float Default 5 mds_client_prealloc_inos Description The number of inode numbers to preallocate per client session. Type 32-bit Integer Default 1000 mds_early_reply Description Determines whether the MDS allows clients to see request results before they commit to the journal. Type Boolean Default true mds_use_tmap Description Use trivialmap for directory updates. 
Type Boolean Default true mds_default_dir_hash Description The function to use for hashing files across directory fragments. Type 32-bit Integer Default 2 ,that is, rjenkins mds_log Description Set to true if the MDS should journal metadata updates. Disable for benchmarking only. Type Boolean Default true mds_log_skip_corrupt_events Description Determines whether the MDS tries to skip corrupt journal events during journal replay. Type Boolean Default false mds_log_max_events Description The maximum events in the journal before Ceph initiates trimming. Set to -1 to disable limits. Type 32-bit Integer Default -1 mds_log_max_segments Description The maximum number of segments or objects in the journal before Ceph initiates trimming. Set to -1 to disable limits. Type 32-bit Integer Default 30 mds_log_max_expiring Description The maximum number of segments to expire in parallels. Type 32-bit Integer Default 20 mds_log_eopen_size Description The maximum number of inodes in an EOpen event. Type 32-bit Integer Default 100 mds_bal_sample_interval Description Determines how frequently to sample directory temperature when making fragmentation decisions. Type Float Default 3 mds_bal_replicate_threshold Description The maximum temperature before Ceph attempts to replicate metadata to other nodes. Type Float Default 8000 mds_bal_unreplicate_threshold Description The minimum temperature before Ceph stops replicating metadata to other nodes. Type Float Default 0 mds_bal_frag Description Determines whether or not the MDS fragments directories. Type Boolean Default false mds_bal_split_size Description The maximum directory size before the MDS splits a directory fragment into smaller bits. The root directory has a default fragment size limit of 10000. Type 32-bit Integer Default 10000 mds_bal_split_rd Description The maximum directory read temperature before Ceph splits a directory fragment. Type Float Default 25000 mds_bal_split_wr Description The maximum directory write temperature before Ceph splits a directory fragment. Type Float Default 10000 mds_bal_split_bits Description The number of bits by which to split a directory fragment. Type 32-bit Integer Default 3 mds_bal_merge_size Description The minimum directory size before Ceph tries to merge adjacent directory fragments. Type 32-bit Integer Default 50 mds_bal_merge_rd Description The minimum read temperature before Ceph merges adjacent directory fragments. Type Float Default 1000 mds_bal_merge_wr Description The minimum write temperature before Ceph merges adjacent directory fragments. Type Float Default 1000 mds_bal_interval Description The frequency, in seconds, of workload exchanges between MDS nodes. Type 32-bit Integer Default 10 mds_bal_fragment_interval Description The frequency, in seconds, of adjusting directory fragmentation. Type 32-bit Integer Default 5 mds_bal_idle_threshold Description The minimum temperature before Ceph migrates a subtree back to its parent. Type Float Default 0 mds_bal_max Description The number of iterations to run balancer before Ceph stops. For testing purposes only. Type 32-bit Integer Default -1 mds_bal_max_until Description The number of seconds to run balancer before Ceph stops. For testing purposes only. Type 32-bit Integer Default -1 mds_bal_mode Description The method for calculating MDS load: 1 = Hybrid. 2 = Request rate and latency. 3 = CPU load. Type 32-bit Integer Default 0 mds_bal_min_rebalance Description The minimum subtree temperature before Ceph migrates. 
Type Float Default 0.1 mds_bal_min_start Description The minimum subtree temperature before Ceph searches a subtree. Type Float Default 0.2 mds_bal_need_min Description The minimum fraction of target subtree size to accept. Type Float Default 0.8 mds_bal_need_max Description The maximum fraction of target subtree size to accept. Type Float Default 1.2 mds_bal_midchunk Description Ceph migrates any subtree that is larger than this fraction of the target subtree size. Type Float Default 0.3 mds_bal_minchunk Description Ceph ignores any subtree that is smaller than this fraction of the target subtree size. Type Float Default 0.001 mds_bal_target_removal_min Description The minimum number of balancer iterations before Ceph removes an old MDS target from the MDS map. Type 32-bit Integer Default 5 mds_bal_target_removal_max Description The maximum number of balancer iterations before Ceph removes an old MDS target from the MDS map. Type 32-bit Integer Default 10 mds_replay_interval Description The journal poll interval when in standby-replay mode for a hot standby . Type Float Default 1 mds_shutdown_check Description The interval for polling the cache during MDS shutdown. Type 32-bit Integer Default 0 mds_thrash_exports Description Ceph randomly exports subtrees between nodes. For testing purposes only. Type 32-bit Integer Default 0 mds_thrash_fragments Description Ceph randomly fragments or merges directories. Type 32-bit Integer Default 0 mds_dump_cache_on_map Description Ceph dumps the MDS cache contents to a file on each MDS map. Type Boolean Default false mds_dump_cache_after_rejoin Description Ceph dumps MDS cache contents to a file after rejoining the cache during recovery. Type Boolean Default false mds_verify_scatter Description Ceph asserts that various scatter/gather invariants are true . For developer use only. Type Boolean Default false mds_debug_scatterstat Description Ceph asserts that various recursive statistics invariants are true . For developer use only. Type Boolean Default false mds_debug_frag Description Ceph verifies directory fragmentation invariants when convenient. For developer use only. Type Boolean Default false mds_debug_auth_pins Description The debug authentication pin invariants. For developer use only. Type Boolean Default false mds_debug_subtrees Description Debugging subtree invariants. For developer use only. Type Boolean Default false mds_kill_mdstable_at Description Ceph injects a MDS failure in a MDS Table code. For developer use only. Type 32-bit Integer Default 0 mds_kill_export_at Description Ceph injects a MDS failure in the subtree export code. For developer use only. Type 32-bit Integer Default 0 mds_kill_import_at Description Ceph injects a MDS failure in the subtree import code. For developer use only. Type 32-bit Integer Default 0 mds_kill_link_at Description Ceph injects a MDS failure in a hard link code. For developer use only. Type 32-bit Integer Default 0 mds_kill_rename_at Description Ceph injects a MDS failure in the rename code. For developer use only. Type 32-bit Integer Default 0 mds_wipe_sessions Description Ceph deletes all client sessions on startup. For testing purposes only. Type Boolean Default 0 mds_wipe_ino_prealloc Description Ceph deletes inode preallocation metadata on startup. For testing purposes only. Type Boolean Default 0 mds_skip_ino Description The number of inode numbers to skip on startup. For testing purposes only. 
Type 32-bit Integer Default 0 mds_standby_for_name Description The MDS daemon is a standby for another MDS daemon of the name specified in this setting. Type String Default N/A mds_standby_for_rank Description An instance of the MDS daemon is a standby for another MDS daemon instance of this rank. Type 32-bit Integer Default -1 mds_standby_replay Description Determines whether the MDS daemon polls and replays the log of an active MDS when used as a hot standby . Type Boolean Default false | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/file_system_guide/metadata-server-daemon-configuration-reference_fs |
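In a running Red Hat Ceph Storage 5 cluster, these options are usually applied centrally with the ceph config command rather than by editing configuration files on each node. The values below are illustrative only; the option names come from the reference above:
# ceph config set mds mds_cache_memory_limit 4294967296
# ceph config set mds mds_session_timeout 120
# ceph config get mds mds_cache_memory_limit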
Chapter 3. Introducing Enterprise Integration Patterns | Chapter 3. Introducing Enterprise Integration Patterns Abstract The Apache Camel's Enterprise Integration Patterns are inspired by a book of the same name written by Gregor Hohpe and Bobby Woolf. The patterns described by these authors provide an excellent toolbox for developing enterprise integration projects. In addition to providing a common language for discussing integration architectures, many of the patterns can be implemented directly using Apache Camel's programming interfaces and XML configuration. 3.1. Overview of the Patterns Enterprise Integration Patterns book Apache Camel supports most of the patterns from the book, Enterprise Integration Patterns by Gregor Hohpe and Bobby Woolf. Messaging systems The messaging systems patterns, shown in Table 3.1, "Messaging Systems" , introduce the fundamental concepts and components that make up a messaging system. Table 3.1. Messaging Systems Icon Name Use Case Figure 5.1, "Message Pattern" How can two applications connected by a message channel exchange a piece of information? Figure 5.2, "Message Channel Pattern" How does one application communicate with another application using messaging? Figure 5.3, "Message Endpoint Pattern" How does an application connect to a messaging channel to send and receive messages? Figure 5.4, "Pipes and Filters Pattern" How can we perform complex processing on a message while still maintaining independence and flexibility? Figure 5.7, "Message Router Pattern" How can you decouple individual processing steps so that messages can be passed to different filters depending on a set of defined conditions? Figure 5.8, "Message Translator Pattern" How do systems using different data formats communicate with each other using messaging? Messaging channels A messaging channel is the basic component used for connecting the participants in a messaging system. The patterns in Table 3.2, "Messaging Channels" describe the different kinds of messaging channels available. Table 3.2. Messaging Channels Icon Name Use Case Figure 6.1, "Point to Point Channel Pattern" How can the caller be sure that exactly one receiver will receive the document or will perform the call? Figure 6.2, "Publish Subscribe Channel Pattern" How can the sender broadcast an event to all interested receivers? Figure 6.3, "Dead Letter Channel Pattern" What will the messaging system do with a message it cannot deliver? Figure 6.4, "Guaranteed Delivery Pattern" How does the sender make sure that a message will be delivered, even if the messaging system fails? Figure 6.5, "Message Bus Pattern" What is an architecture that enables separate, decoupled applications to work together, such that one or more of the applications can be added or removed without affecting the others? Message construction The message construction patterns, shown in Table 3.3, "Message Construction" , describe the various forms and functions of the messages that pass through the system. Table 3.3. Message Construction Icon Name Use Case the section called "Overview" How does a requestor identify the request that generated the received reply? Section 7.3, "Return Address" How does a replier know where to send the reply? Message routing The message routing patterns, shown in Table 3.4, "Message Routing" , describe various ways of linking message channels together, including various algorithms that can be applied to the message stream (without modifying the body of the message). Table 3.4. 
Message Routing Icon Name Use Case Section 8.1, "Content-Based Router" How do we handle a situation where the implementation of a single logical function (for example, inventory check) is spread across multiple physical systems? Section 8.2, "Message Filter" How does a component avoid receiving uninteresting messages? Section 8.3, "Recipient List" How do we route a message to a list of dynamically specified recipients? Section 8.4, "Splitter" How can we process a message if it contains multiple elements, each of which might have to be processed in a different way? Section 8.5, "Aggregator" How do we combine the results of individual, but related messages so that they can be processed as a whole? Section 8.6, "Resequencer" How can we get a stream of related, but out-of-sequence, messages back into the correct order? Section 8.14, "Composed Message Processor" How can you maintain the overall message flow when processing a message consisting of multiple elements, each of which may require different processing? Section 8.15, "Scatter-Gather" How do you maintain the overall message flow when a message needs to be sent to multiple recipients, each of which may send a reply? Section 8.7, "Routing Slip" How do we route a message consecutively through a series of processing steps when the sequence of steps is not known at design-time, and might vary for each message? Section 8.8, "Throttler" How can I throttle messages to ensure that a specific endpoint does not get overloaded, or that we don't exceed an agreed SLA with some external service? Section 8.9, "Delayer" How can I delay the sending of a message? Section 8.10, "Load Balancer" How can I balance load across a number of endpoints? Section 8.11, "Hystrix" How can I use a Hystrix circuit breaker when calling an external service? New in Camel 2.18. Section 8.12, "Service Call" How can I call a remote service in a distributed system by looking up the service in a registry? New in Camel 2.18. Section 8.13, "Multicast" How can I route a message to a number of endpoints at the same time? Section 8.16, "Loop" How can I repeat processing a message in a loop? Section 8.17, "Sampling" How can I sample one message out of many in a given period to avoid overloading a downstream route? Message transformation The message transformation patterns, shown in Table 3.5, "Message Transformation" , describe how to modify the contents of messages for various purposes. Table 3.5. Message Transformation Icon Name Use Case Section 10.1, "Content Enricher" How do I communicate with another system if the message originator does not have all required data items? Section 10.2, "Content Filter" How do you simplify dealing with a large message, when you are interested in only a few data items? Section 10.4, "Claim Check EIP" How can we reduce the data volume of messages sent across the system without sacrificing information content? Section 10.3, "Normalizer" How do you process messages that are semantically equivalent, but arrive in a different format? Section 10.5, "Sort" How can I sort the body of a message? Messaging endpoints A messaging endpoint denotes the point of contact between a messaging channel and an application. The messaging endpoint patterns, shown in Table 3.6, "Messaging Endpoints" , describe various features and qualities of service that can be configured on an endpoint. Table 3.6. 
Messaging Endpoints Icon Name Use Case Section 11.1, "Messaging Mapper" How do you move data between domain objects and the messaging infrastructure while keeping the two independent of each other? Section 11.2, "Event Driven Consumer" How can an application automatically consume messages as they become available? Section 11.3, "Polling Consumer" How can an application consume a message when the application is ready? Section 11.4, "Competing Consumers" How can a messaging client process multiple messages concurrently? Section 11.5, "Message Dispatcher" How can multiple consumers on a single channel coordinate their message processing? Section 11.6, "Selective Consumer" How can a message consumer select which messages it wants to receive? Section 11.7, "Durable Subscriber" How can a subscriber avoid missing messages when it's not listening for them? Section 11.8, "Idempotent Consumer" How can a message receiver deal with duplicate messages? Section 11.9, "Transactional Client" How can a client control its transactions with the messaging system? Section 11.10, "Messaging Gateway" How do you encapsulate access to the messaging system from the rest of the application? Section 11.11, "Service Activator" How can an application design a service to be invoked by various messaging technologies as well as by non-messaging techniques? System management The system management patterns, shown in Table 3.7, "System Management" , describe how to monitor, test, and administer a messaging system. Table 3.7. System Management Icon Name Use Case Chapter 12, System Management How do you inspect messages that travel on a point-to-point channel? | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/introtoeip |
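As a brief illustration of how the patterns above map onto Apache Camel, the following is a minimal sketch of a Content-Based Router written in the Java DSL. The endpoint URIs and the orderType header are illustrative assumptions, not part of the pattern catalog itself.

import org.apache.camel.builder.RouteBuilder;

public class OrderRoutingRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Content-Based Router: forward each incoming message to a different
        // queue depending on a header value (hypothetical endpoints and header).
        from("jms:queue:incomingOrders")
            .choice()
                .when(header("orderType").isEqualTo("widget"))
                    .to("jms:queue:widgetOrders")
                .when(header("orderType").isEqualTo("gadget"))
                    .to("jms:queue:gadgetOrders")
                .otherwise()
                    .to("jms:queue:unknownOrders");
    }
}

The same route can also be expressed in XML configuration; the Java DSL form is shown here only because it keeps the pattern-to-code mapping compact.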
function::print_ubacktrace_brief | function::print_ubacktrace_brief Name function::print_ubacktrace_brief - Print stack back trace for current user-space task. Synopsis Arguments None Description Equivalent to print_ubacktrace , but the output for each symbol is shorter (just the name and offset, or just the hex address if no symbol could be found). Note To get (full) backtraces for user-space applications and shared libraries not mentioned in the current script, run stap with -d /path/to/exe-or-so and/or add --ldd to load all needed unwind data. | [
"print_ubacktrace_brief()"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-print-ubacktrace-brief |
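A minimal usage sketch follows; the probe point and binary path are assumptions chosen for illustration, and the -d and --ldd options are the ones referenced in the note above.

# Print a brief user-space backtrace when the (hypothetical) program
# /usr/local/bin/myapp enters its main function, then exit.
stap -d /usr/local/bin/myapp --ldd -e 'probe process("/usr/local/bin/myapp").function("main") { print_ubacktrace_brief(); exit() }'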
probe::tcp.receive | probe::tcp.receive Name probe::tcp.receive - Called when a TCP packet is received Synopsis tcp.receive Values psh TCP PSH flag ack TCP ACK flag daddr A string representing the destination IP address syn TCP SYN flag rst TCP RST flag sport TCP source port protocol Packet protocol from driver urg TCP URG flag name Name of the probe point family IP address family fin TCP FIN flag saddr A string representing the source IP address iphdr IP header address dport TCP destination port | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-tcp-receive |
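The following is a minimal command-line sketch of using this probe; the printf format is an assumption for illustration and simply prints the documented context variables.

# Trace received TCP packets and print the address/port pairs plus the SYN
# and ACK flags; run as root and press Ctrl+C to stop.
stap -e 'probe tcp.receive { printf("%s:%d -> %s:%d syn=%d ack=%d\n", saddr, sport, daddr, dport, syn, ack) }'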
10.5. Configuration Directives in httpd.conf | 10.5. Configuration Directives in httpd.conf The Apache HTTP Server configuration file is /etc/httpd/conf/httpd.conf . The httpd.conf file is well-commented and mostly self-explanatory. The default configuration works for most situations; however, it is a good idea to become familiar with some of the more important configuration options. Warning With the release of Apache HTTP Server 2.0, many configuration options have changed. If migrating a version 1.3 configuration file to the 2.0 format, refer to Section 10.2, "Migrating Apache HTTP Server 1.3 Configuration Files" . 10.5.1. General Configuration Tips If configuring the Apache HTTP Server, edit /etc/httpd/conf/httpd.conf and then either reload, restart, or stop and start the httpd process as outlined in Section 10.4, "Starting and Stopping httpd " . Before editing httpd.conf , make a copy of the original file. Creating a backup makes it easier to recover from mistakes made while editing the configuration file. If a mistake is made and the Web server does not work correctly, first review the recently edited passages in httpd.conf to verify there are no typos. Next, look in the Web server's error log, /var/log/httpd/error_log . The error log may not be easy to interpret, depending on your level of expertise. However, the last entries in the error log should provide useful information. The following subsections contain a list of short descriptions for many of the directives included in httpd.conf . These descriptions are not exhaustive. For more information, refer to the Apache documentation online at http://httpd.apache.org/docs-2.0/ . For more information about mod_ssl directives, refer to the documentation online at http://httpd.apache.org/docs-2.0/mod/mod_ssl.html . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-apache-config |
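To follow the backup and troubleshooting advice above, a short command sketch is shown below; the backup file name is only an example, and apachectl configtest is one common way to validate the edited file before reloading.

# Back up the configuration, validate the syntax, then reload httpd and
# check the error log if something goes wrong.
cp /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd.conf.bak
apachectl configtest
service httpd reload
tail /var/log/httpd/error_log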
Chapter 22. Setting the due date and priority of a task | Chapter 22. Setting the due date and priority of a task You can set the priority, due date, and time of a task in Business Central from the Task Inbox page. Note that not all users have permissions to set the priority and due date of a task. Procedure In Business Central, go to Menu Track Task Inbox . On the Task Inbox page, click the task to open it. On the task page, click the Details tab. In the Due Date field, select the required date from the calendar and the due time from the drop-down list. In the Priority field, select the required priority. Click Update . | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/interacting-with-processes-setting-date-priority-proc |
Chapter 3. Red Hat certification self check (rhcert/selfcheck) | Chapter 3. Red Hat certification self check (rhcert/selfcheck) The Red Hat Certification self check test, also known as rhcert/selfcheck, confirms that all the software packages required in the certification process are installed and that they have not been altered. This ensures that the test environment is ready for the certification process and that all the certification software packages are supportable. Success criteria The test environment includes the packages required for the certification process and their dependencies. The required certification packages have not been modified. | null | https://docs.redhat.com/en/documentation/red_hat_certified_cloud_and_service_provider_certification/2025/html/red_hat_certified_cloud_and_service_provider_certification_for_red_hat_enterprise_linux_for_sap_images_policy_guide/con-rhcert-selfcheck_cloud-sap-pol-test-suite-version-architecture |
Data Grid downloads | Data Grid downloads Access the Data Grid Software Downloads on the Red Hat customer portal. Note You must have a Red Hat account to access and download Data Grid software. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_cross-site_replication/rhdg-downloads_datagrid |
Chapter 11. Live migration | Chapter 11. Live migration 11.1. About live migration Live migration is the process of moving a running virtual machine (VM) to another node in the cluster without interrupting the virtual workload. Live migration enables smooth transitions during cluster upgrades or any time a node needs to be drained for maintenance or configuration changes. By default, live migration traffic is encrypted using Transport Layer Security (TLS). 11.1.1. Live migration requirements Live migration has the following requirements: The cluster must have shared storage with ReadWriteMany (RWX) access mode. The cluster must have sufficient RAM and network bandwidth. Note You must ensure that there is enough memory request capacity in the cluster to support node drains that result in live migrations. You can determine the approximate required spare memory by using the following calculation: The default number of migrations that can run in parallel in the cluster is 5. If a VM uses a host model CPU, the nodes must support the CPU. Configuring a dedicated Multus network for live migration is highly recommended. A dedicated network minimizes the effects of network saturation on tenant workloads during migration. 11.1.2. VM migration tuning You can adjust your cluster-wide live migration settings based on the type of workload and migration scenario. This enables you to control how many VMs migrate at the same time, the network bandwidth you want to use for each migration, and how long OpenShift Virtualization attempts to complete the migration before canceling the process. Configure these settings in the HyperConverged custom resource (CR). If you are migrating multiple VMs per node at the same time, set a bandwidthPerMigration limit to prevent a large or busy VM from using a large portion of the node's network bandwidth. By default, the bandwidthPerMigration value is 0 , which means unlimited. A large VM running a heavy workload (for example, database processing), with higher memory dirty rates, requires a higher bandwidth to complete the migration. Note Post copy mode, when enabled, triggers if the initial pre-copy phase does not complete within the defined timeout. During post copy, the VM CPUs pause on the source host while transferring the minimum required memory pages. Then the VM CPUs activate on the destination host, and the remaining memory pages transfer into the destination node at runtime. This can impact performance during the transfer. Post copy mode should not be used for critical data, or with unstable networks. 11.1.3. Common live migration tasks You can perform the following live migration tasks: Configure live migration settings Configure live migration for heavy workloads Initiate and cancel live migration Monitor the progress of all live migrations in the Migration tab of the Red Hat OpenShift Service on AWS web console. View VM migration metrics in the Metrics tab of the web console. 11.1.4. Additional resources Prometheus queries for live migration VM run strategies VM and cluster eviction strategies 11.2. Configuring live migration You can configure live migration settings to ensure that the migration processes do not overwhelm the cluster. You can configure live migration policies to apply different migration configurations to groups of virtual machines (VMs). 11.2.1. 
Configuring live migration limits and timeouts Configure live migration limits and timeouts for the cluster by updating the HyperConverged custom resource (CR), which is located in the openshift-cnv namespace. Procedure Edit the HyperConverged CR and add the necessary live migration parameters: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Example configuration file apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: bandwidthPerMigration: 64Mi 1 completionTimeoutPerGiB: 800 2 parallelMigrationsPerCluster: 5 3 parallelOutboundMigrationsPerNode: 2 4 progressTimeout: 150 5 allowPostCopy: false 6 1 Bandwidth limit of each migration, where the value is the quantity of bytes per second. For example, a value of 2048Mi means 2048 MiB/s. Default: 0 , which is unlimited. 2 The migration is canceled if it has not completed in this time, in seconds per GiB of memory. For example, a VM with 6GiB memory times out if it has not completed migration in 4800 seconds. If the Migration Method is BlockMigration , the size of the migrating disks is included in the calculation. 3 Number of migrations running in parallel in the cluster. Default: 5 . 4 Maximum number of outbound migrations per node. Default: 2 . 5 The migration is canceled if memory copy fails to make progress in this time, in seconds. Default: 150 . 6 If a VM is running a heavy workload and the memory dirty rate is too high, this can prevent the migration from one node to another from converging. To prevent this, you can enable post copy mode. By default, allowPostCopy is set to false . Note You can restore the default value for any spec.liveMigrationConfig field by deleting that key/value pair and saving the file. For example, delete progressTimeout: <value> to restore the default progressTimeout: 150 . 11.2.2. Configure live migration for heavy workloads When migrating a VM running a heavy workload (for example, database processing) with higher memory dirty rates, you need a higher bandwidth to complete the migration. If the dirty rate is too high, the migration from one node to another does not converge. To prevent this, enable post copy mode. Post copy mode triggers if the initial pre-copy phase does not complete within the defined timeout. During post copy, the VM CPUs pause on the source host while transferring the minimum required memory pages. Then the VM CPUs activate on the destination host, and the remaining memory pages transfer into the destination node at runtime. Configure live migration for heavy workloads by updating the HyperConverged custom resource (CR), which is located in the openshift-cnv namespace. Procedure Edit the HyperConverged CR and add the necessary parameters for migrating heavy workloads: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Example configuration file apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: bandwidthPerMigration: 0Mi 1 completionTimeoutPerGiB: 150 2 parallelMigrationsPerCluster: 5 3 parallelOutboundMigrationsPerNode: 1 4 progressTimeout: 150 5 allowPostCopy: true 6 1 Bandwidth limit of each migration, where the value is the quantity of bytes per second. The default is 0 , which is unlimited. 2 The migration is canceled if it is not completed in this time, and triggers post copy mode, when post copy is enabled. 
This value is measured in seconds per GiB of memory. You can lower completionTimeoutPerGiB to trigger post copy mode earlier in the migration process, or raise the completionTimeoutPerGiB to trigger post copy mode later in the migration process. 3 Number of migrations running in parallel in the cluster. The default is 5 . Keeping the parallelMigrationsPerCluster setting low is better when migrating heavy workloads. 4 Maximum number of outbound migrations per node. Configure a single VM per node for heavy workloads. 5 The migration is canceled if memory copy fails to make progress in this time. This value is measured in seconds. Increase this parameter for large memory sizes running heavy workloads. 6 Use post copy mode when memory dirty rates are high to ensure the migration converges. Set allowPostCopy to true to enable post copy mode. Optional: If your main network is too busy for the migration, configure a secondary, dedicated migration network. Note Post copy mode can impact performance during the transfer, and should not be used for critical data, or with unstable networks. 11.2.3. Additional resources Configuring a dedicated network for live migration 11.2.4. Live migration policies You can create live migration policies to apply different migration configurations to groups of VMs that are defined by VM or project labels. Tip You can create live migration policies by using the Red Hat OpenShift Service on AWS web console. 11.2.4.1. Creating a live migration policy by using the command line You can create a live migration policy by using the command line. KubeVirt applies the live migration policy to selected virtual machines (VMs) by using any combination of labels: VM labels such as size , os , or gpu Project labels such as priority , bandwidth , or hpc-workload For the policy to apply to a specific group of VMs, all labels on the group of VMs must match the labels of the policy. Note If multiple live migration policies apply to a VM, the policy with the greatest number of matching labels takes precedence. If multiple policies meet this criteria, the policies are sorted by alphabetical order of the matching label keys, and the first one in that order takes precedence. Procedure Edit the VM object to which you want to apply a live migration policy, and add the corresponding VM labels. Open the YAML configuration of the resource: USD oc edit vm <vm_name> Adjust the required label values in the .spec.template.metadata.labels section of the configuration. For example, to mark the VM as a production VM for the purposes of migration policies, add the kubevirt.io/environment: production line: apiVersion: migrations.kubevirt.io/v1alpha1 kind: VirtualMachine metadata: name: <vm_name> namespace: default labels: app: my-app environment: production spec: template: metadata: labels: kubevirt.io/domain: <vm_name> kubevirt.io/size: large kubevirt.io/environment: production # ... Save and exit the configuration. Configure a MigrationPolicy object with the corresponding labels. The following example configures a policy that applies to all VMs that are labeled as production : apiVersion: migrations.kubevirt.io/v1alpha1 kind: MigrationPolicy metadata: name: <migration_policy> spec: selectors: namespaceSelector: 1 hpc-workloads: "True" xyz-workloads-type: "" virtualMachineInstanceSelector: 2 kubevirt.io/environment: "production" 1 Specify project labels. 2 Specify VM labels. Create the migration policy by running the following command: USD oc create -f <migration_policy>.yaml 11.2.5. 
Additional resources Configuring a dedicated Multus network for live migration 11.3. Initiating and canceling live migration You can initiate the live migration of a virtual machine (VM) to another node by using the Red Hat OpenShift Service on AWS web console or the command line . You can cancel a live migration by using the web console or the command line . The VM remains on its original node. Tip You can also initiate and cancel live migration by using the virtctl migrate <vm_name> and virtctl migrate-cancel <vm_name> commands. 11.3.1. Initiating live migration 11.3.1.1. Initiating live migration by using the web console You can live migrate a running virtual machine (VM) to a different node in the cluster by using the Red Hat OpenShift Service on AWS web console. Note The Migrate action is visible to all users but only cluster administrators can initiate a live migration. Prerequisites The VM must be migratable. If the VM is configured with a host model CPU, the cluster must have an available node that supports the CPU model. Procedure Navigate to Virtualization VirtualMachines in the web console. Select Migrate from the Options menu beside a VM. Click Migrate . 11.3.1.2. Initiating live migration by using the command line You can initiate the live migration of a running virtual machine (VM) by using the command line to create a VirtualMachineInstanceMigration object for the VM. Procedure Create a VirtualMachineInstanceMigration manifest for the VM that you want to migrate: apiVersion: kubevirt.io/v1 kind: VirtualMachineInstanceMigration metadata: name: <migration_name> spec: vmiName: <vm_name> Create the object by running the following command: USD oc create -f <migration_name>.yaml The VirtualMachineInstanceMigration object triggers a live migration of the VM. This object exists in the cluster for as long as the virtual machine instance is running, unless manually deleted. Verification Obtain the VM status by running the following command: USD oc describe vmi <vm_name> -n <namespace> Example output # ... Status: Conditions: Last Probe Time: <nil> Last Transition Time: <nil> Status: True Type: LiveMigratable Migration Method: LiveMigration Migration State: Completed: true End Timestamp: 2018-12-24T06:19:42Z Migration UID: d78c8962-0743-11e9-a540-fa163e0c69f1 Source Node: node2.example.com Start Timestamp: 2018-12-24T06:19:35Z Target Node: node1.example.com Target Node Address: 10.9.0.18:43891 Target Node Domain Detected: true 11.3.2. Canceling live migration 11.3.2.1. Canceling live migration by using the web console You can cancel the live migration of a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console. Procedure Navigate to Virtualization VirtualMachines in the web console. Select Cancel Migration on the Options menu beside a VM. 11.3.2.2. Canceling live migration by using the command line Cancel the live migration of a virtual machine by deleting the VirtualMachineInstanceMigration object associated with the migration. Procedure Delete the VirtualMachineInstanceMigration object that triggered the live migration, migration-job in this example: USD oc delete vmim migration-job | [
"Product of (Maximum number of nodes that can drain in parallel) and (Highest total VM memory request allocations across nodes)",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: bandwidthPerMigration: 64Mi 1 completionTimeoutPerGiB: 800 2 parallelMigrationsPerCluster: 5 3 parallelOutboundMigrationsPerNode: 2 4 progressTimeout: 150 5 allowPostCopy: false 6",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: bandwidthPerMigration: 0Mi 1 completionTimeoutPerGiB: 150 2 parallelMigrationsPerCluster: 5 3 parallelOutboundMigrationsPerNode: 1 4 progressTimeout: 150 5 allowPostCopy: true 6",
"oc edit vm <vm_name>",
"apiVersion: migrations.kubevirt.io/v1alpha1 kind: VirtualMachine metadata: name: <vm_name> namespace: default labels: app: my-app environment: production spec: template: metadata: labels: kubevirt.io/domain: <vm_name> kubevirt.io/size: large kubevirt.io/environment: production",
"apiVersion: migrations.kubevirt.io/v1alpha1 kind: MigrationPolicy metadata: name: <migration_policy> spec: selectors: namespaceSelector: 1 hpc-workloads: \"True\" xyz-workloads-type: \"\" virtualMachineInstanceSelector: 2 kubevirt.io/environment: \"production\"",
"oc create -f <migration_policy>.yaml",
"apiVersion: kubevirt.io/v1 kind: VirtualMachineInstanceMigration metadata: name: <migration_name> spec: vmiName: <vm_name>",
"oc create -f <migration_name>.yaml",
"oc describe vmi <vm_name> -n <namespace>",
"Status: Conditions: Last Probe Time: <nil> Last Transition Time: <nil> Status: True Type: LiveMigratable Migration Method: LiveMigration Migration State: Completed: true End Timestamp: 2018-12-24T06:19:42Z Migration UID: d78c8962-0743-11e9-a540-fa163e0c69f1 Source Node: node2.example.com Start Timestamp: 2018-12-24T06:19:35Z Target Node: node1.example.com Target Node Address: 10.9.0.18:43891 Target Node Domain Detected: true",
"oc delete vmim migration-job"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/virtualization/live-migration |
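To watch a migration that was started with a VirtualMachineInstanceMigration object, a minimal verification sketch (placeholder names) is shown below.

# List the migration objects in the VM's namespace and check their phase.
oc get vmim -n <namespace>

# Inspect the migration state that is reported on the virtual machine instance.
oc describe vmi <vm_name> -n <namespace> | grep -A 10 'Migration State'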
Chapter 7. Known issues | Chapter 7. Known issues 7.1. Issues with starting a new workspace from a URL that points to a branch of a repository that doesn't have a devfile There is a known issue affecting repositories without a devfile.yaml file. If you start a new workspace from a branch of such a repository, the default branch (e.g. 'main') is used for project cloning instead of the expected branch. Additional resources CRW-6860 7.2. Refresh token mode causes cyclic reload of the workspace start page There is a known issue when the experimental refresh token mode is applied using the CHE_FORCE_REFRESH_PERSONAL_ACCESS_TOKEN property for the 'GitHub' and 'Azure DevOps' OAuth providers. This causes the workspace start to reload the dashboard cyclically, creating a new personal access token on each page restart. The refresh token mode works correctly for the 'GitLab' and 'BitBucket' OAuth providers. Additional resources CRW-6859 7.3. Workspace creation failure for GitHub Enterprise public repositories with no PAT or OAuth configuration There is a known issue with creating a workspace from GitHub Enterprise public repositories that have no personal access token (PAT) or OAuth configured. If you try to create a workspace from such a repository, you receive the following error message: "Failed to create the workspace. Cannot build factory with any of the provided parameters. Please check parameters correctness, and resend query." Workaround Add a PAT for the Git provider, or configure OAuth. Additional resources CRW-6831 7.4. Ansible Lightspeed not connecting to Ansible server There is a known issue with Ansible Lightspeed and connection to the Ansible server. If the OpenShift Dev Spaces environment is not under the *.openshiftapps.com domain, Ansible Lightspeed cannot connect to the Ansible server. There is no workaround available. Additional resources CRW-5691 7.5. FIPS compliance update There is a known issue with FIPS compliance that results in certain cryptographic modules not being FIPS-validated. Below is a list of requirements and limitations for using FIPS with OpenShift Dev Spaces: Required cluster and operator updates Update your Red Hat OpenShift Container Platform installation to the latest z-stream update for 4.11, 4.12, or 4.13 as appropriate. If you do not already have FIPS enabled, you will need to uninstall and reinstall. Once the cluster is up and running, install OpenShift Dev Spaces 3.7.1 (3.7-264) and verify that the latest DevWorkspace operator bundle 0.21.2 (0.21-7) or newer is also installed and updated. See https://catalog.redhat.com/software/containers/devworkspace/devworkspace-operator-bundle/60ec9f48744684587e2186a3 Golang compiler in UDI image The Universal Developer Image (UDI) container includes a golang compiler, which was built without the CGO_ENABLED=1 flag. The check-payload scanner ( https://github.com/openshift/check-payload ) will throw an error, but this can be safely ignored provided that anything you build with this compiler sets the correct flag CGO_ENABLED=1 and does NOT use extldflags -static or -tags no_openssl . The resulting binaries can be scanned and should pass without error. Statically linked binaries You can find statically linked binaries not related to cryptography in these two containers: code-rhel8 and idea-rhel8. As they are not related to cryptography, they do not affect FIPS compliance. Helm support for FIPS The UDI container includes the helm binary, which was not compiled with FIPS support. If you are in a FIPS environment, do not use helm .
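For the golang compiler note in section 7.5 above, a minimal sketch of a compliant build inside the UDI container follows; the module path and binary name are hypothetical.

# Build with CGO enabled and without static linking so that the resulting
# binary can be scanned by check-payload as described above.
CGO_ENABLED=1 go build -o myservice ./cmd/myservice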
Additional resources CRW-4598 7.6. Debugger does not work in the .NET sample Currently, the debugger in Microsoft Visual Studio Code - Open Source does not work in the .NET sample. Workaround Use a different image from the following sources: Custom UBI-9 based Dockerfile devfile.yaml Additional resources CRW-3563 | null | https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.15/html/release_notes_and_known_issues/known-issues |
Chapter 31. endpoint | Chapter 31. endpoint This chapter describes the commands under the endpoint command. 31.1. endpoint add project Associate a project to an endpoint Usage: Table 31.1. Positional Arguments Value Summary <endpoint> Endpoint to associate with specified project (name or ID) <project> Project to associate with specified endpoint name or ID) Table 31.2. Optional Arguments Value Summary -h, --help Show this help message and exit --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. 31.2. endpoint create Create new endpoint Usage: Table 31.3. Positional Arguments Value Summary <service> Service to be associated with new endpoint (name or ID) <interface> New endpoint interface type (admin, public or internal) <url> New endpoint url Table 31.4. Optional Arguments Value Summary -h, --help Show this help message and exit --region <region-id> New endpoint region id --enable Enable endpoint (default) --disable Disable endpoint Table 31.5. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 31.6. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 31.7. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 31.8. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 31.3. endpoint delete Delete endpoint(s) Usage: Table 31.9. Positional Arguments Value Summary <endpoint-id> Endpoint(s) to delete (id only) Table 31.10. Optional Arguments Value Summary -h, --help Show this help message and exit 31.4. endpoint group add project Add a project to an endpoint group Usage: Table 31.11. Positional Arguments Value Summary <endpoint-group> Endpoint group (name or id) <project> Project to associate (name or id) Table 31.12. Optional Arguments Value Summary -h, --help Show this help message and exit --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. 31.5. endpoint group create Create new endpoint group Usage: Table 31.13. Positional Arguments Value Summary <name> Name of the endpoint group <filename> Filename that contains a new set of filters Table 31.14. Optional Arguments Value Summary -h, --help Show this help message and exit --description DESCRIPTION Description of the endpoint group Table 31.15. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 31.16. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 31.17. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 31.18. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. 
--fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 31.6. endpoint group delete Delete endpoint group(s) Usage: Table 31.19. Positional Arguments Value Summary <endpoint-group> Endpoint group(s) to delete (name or id) Table 31.20. Optional Arguments Value Summary -h, --help Show this help message and exit 31.7. endpoint group list List endpoint groups Usage: Table 31.21. Optional Arguments Value Summary -h, --help Show this help message and exit --endpointgroup <endpoint-group> Endpoint group (name or id) --project <project> Project (name or id) --domain <domain> Domain owning <project> (name or id) Table 31.22. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 31.23. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 31.24. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 31.25. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 31.8. endpoint group remove project Remove project from endpoint group Usage: Table 31.26. Positional Arguments Value Summary <endpoint-group> Endpoint group (name or id) <project> Project to remove (name or id) Table 31.27. Optional Arguments Value Summary -h, --help Show this help message and exit --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. 31.9. endpoint group set Set endpoint group properties Usage: Table 31.28. Positional Arguments Value Summary <endpoint-group> Endpoint group to modify (name or id) Table 31.29. Optional Arguments Value Summary -h, --help Show this help message and exit --name <name> New enpoint group name --filters <filename> Filename that contains a new set of filters --description <description> New endpoint group description 31.10. endpoint group show Display endpoint group details Usage: Table 31.30. Positional Arguments Value Summary <endpointgroup> Endpoint group (name or id) Table 31.31. Optional Arguments Value Summary -h, --help Show this help message and exit Table 31.32. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 31.33. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 31.34. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 31.35. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. 
--fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 31.11. endpoint list List endpoints Usage: Table 31.36. Optional Arguments Value Summary -h, --help Show this help message and exit --service <service> Filter by service (type, name or id) --interface <interface> Filter by interface type (admin, public or internal) --region <region-id> Filter by region id --endpoint <endpoint-group> Endpoint to list filters --project <project> Project to list filters (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 31.37. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 31.38. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 31.39. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 31.40. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 31.12. endpoint remove project Dissociate a project from an endpoint Usage: Table 31.41. Positional Arguments Value Summary <endpoint> Endpoint to dissociate from specified project (name or ID) <project> Project to dissociate from specified endpoint name or ID) Table 31.42. Optional Arguments Value Summary -h, --help Show this help message and exit --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. 31.13. endpoint set Set endpoint properties Usage: Table 31.43. Positional Arguments Value Summary <endpoint-id> Endpoint to modify (id only) Table 31.44. Optional Arguments Value Summary -h, --help Show this help message and exit --region <region-id> New endpoint region id --interface <interface> New endpoint interface type (admin, public or internal) --url <url> New endpoint url --service <service> New endpoint service (name or id) --enable Enable endpoint --disable Disable endpoint 31.14. endpoint show Display endpoint details Usage: Table 31.45. Positional Arguments Value Summary <endpoint> Endpoint to display (endpoint id, service id, service name, service type) Table 31.46. Optional Arguments Value Summary -h, --help Show this help message and exit Table 31.47. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 31.48. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 31.49. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 31.50. 
Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack endpoint add project [-h] [--project-domain <project-domain>] <endpoint> <project>",
"openstack endpoint create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--region <region-id>] [--enable | --disable] <service> <interface> <url>",
"openstack endpoint delete [-h] <endpoint-id> [<endpoint-id> ...]",
"openstack endpoint group add project [-h] [--project-domain <project-domain>] <endpoint-group> <project>",
"openstack endpoint group create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description DESCRIPTION] <name> <filename>",
"openstack endpoint group delete [-h] <endpoint-group> [<endpoint-group> ...]",
"openstack endpoint group list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--endpointgroup <endpoint-group> | --project <project>] [--domain <domain>]",
"openstack endpoint group remove project [-h] [--project-domain <project-domain>] <endpoint-group> <project>",
"openstack endpoint group set [-h] [--name <name>] [--filters <filename>] [--description <description>] <endpoint-group>",
"openstack endpoint group show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <endpointgroup>",
"openstack endpoint list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--service <service>] [--interface <interface>] [--region <region-id>] [--endpoint <endpoint-group> | --project <project>] [--project-domain <project-domain>]",
"openstack endpoint remove project [-h] [--project-domain <project-domain>] <endpoint> <project>",
"openstack endpoint set [-h] [--region <region-id>] [--interface <interface>] [--url <url>] [--service <service>] [--enable | --disable] <endpoint-id>",
"openstack endpoint show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <endpoint>"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/endpoint |
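A short usage sketch for the create and list subcommands follows; the region, service name, and URL are illustrative assumptions and must match services that already exist in your catalog.

# Register a public endpoint for an (assumed) existing compute service.
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1

# Confirm that the endpoint was created.
openstack endpoint list --service compute --interface public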
33.3. Emergency Mode | 33.3. Emergency Mode Emergency mode provides the minimal bootable environment and allows you to repair your system even in situations when rescue mode is unavailable. In emergency mode , the system mounts only the root file system, and it is mounted as read-only. Also, the system does not activate any network interfaces and only a minimum of the essential services are set up. The system does not load any init scripts; therefore, you can still mount file systems to recover data that would be lost during a re-installation if init is corrupted or not working. To boot into emergency mode, follow this procedure: Procedure 33.3. Booting into Emergency Mode At the GRUB boot screen, press any key to enter the GRUB interactive menu. Select Red Hat Enterprise Linux with the version of the kernel that you want to boot and press the a key to append the line. Type emergency as a separate word at the end of the line and press Enter to exit GRUB edit mode. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-emergency_mode |
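As an illustration of the append step, the boot arguments line might look like the following after you add emergency; the root device shown is hypothetical.

grub append> ro root=/dev/mapper/vg_example-lv_root rhgb quiet emergency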
Operator APIs | Operator APIs OpenShift Container Platform 4.17 Reference guide for Operator APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/operator_apis/index |
Chapter 20. KubeStorageVersionMigrator [operator.openshift.io/v1] | Chapter 20. KubeStorageVersionMigrator [operator.openshift.io/v1] Description KubeStorageVersionMigrator provides information to configure an operator to manage kube-storage-version-migrator. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 20.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object status object 20.1.1. .spec Description Type object Property Type Description logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". unsupportedConfigOverrides `` unsupportedConfigOverrides holds a sparse config that will override any previously set options. It only needs to be the fields to override it will end up overlaying in the following order: 1. hardcoded defaults 2. observedConfig 3. unsupportedConfigOverrides 20.1.2. .status Description Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 20.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 20.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. 
Type object Property Type Description lastTransitionTime string message string reason string status string type string 20.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 20.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 20.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/kubestorageversionmigrators DELETE : delete collection of KubeStorageVersionMigrator GET : list objects of kind KubeStorageVersionMigrator POST : create a KubeStorageVersionMigrator /apis/operator.openshift.io/v1/kubestorageversionmigrators/{name} DELETE : delete a KubeStorageVersionMigrator GET : read the specified KubeStorageVersionMigrator PATCH : partially update the specified KubeStorageVersionMigrator PUT : replace the specified KubeStorageVersionMigrator /apis/operator.openshift.io/v1/kubestorageversionmigrators/{name}/status GET : read status of the specified KubeStorageVersionMigrator PATCH : partially update status of the specified KubeStorageVersionMigrator PUT : replace status of the specified KubeStorageVersionMigrator 20.2.1. /apis/operator.openshift.io/v1/kubestorageversionmigrators Table 20.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of KubeStorageVersionMigrator Table 20.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 20.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind KubeStorageVersionMigrator Table 20.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 20.5. HTTP responses HTTP code Reponse body 200 - OK KubeStorageVersionMigratorList schema 401 - Unauthorized Empty HTTP method POST Description create a KubeStorageVersionMigrator Table 20.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 20.7. Body parameters Parameter Type Description body KubeStorageVersionMigrator schema Table 20.8. HTTP responses HTTP code Reponse body 200 - OK KubeStorageVersionMigrator schema 201 - Created KubeStorageVersionMigrator schema 202 - Accepted KubeStorageVersionMigrator schema 401 - Unauthorized Empty 20.2.2. /apis/operator.openshift.io/v1/kubestorageversionmigrators/{name} Table 20.9. Global path parameters Parameter Type Description name string name of the KubeStorageVersionMigrator Table 20.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a KubeStorageVersionMigrator Table 20.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. 
Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 20.12. Body parameters Parameter Type Description body DeleteOptions schema Table 20.13. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified KubeStorageVersionMigrator Table 20.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 20.15. HTTP responses HTTP code Response body 200 - OK KubeStorageVersionMigrator schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified KubeStorageVersionMigrator Table 20.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be no more than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 20.17. Body parameters Parameter Type Description body Patch schema Table 20.18.
HTTP responses HTTP code Response body 200 - OK KubeStorageVersionMigrator schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified KubeStorageVersionMigrator Table 20.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be no more than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 20.20. Body parameters Parameter Type Description body KubeStorageVersionMigrator schema Table 20.21. HTTP responses HTTP code Response body 200 - OK KubeStorageVersionMigrator schema 201 - Created KubeStorageVersionMigrator schema 401 - Unauthorized Empty 20.2.3. /apis/operator.openshift.io/v1/kubestorageversionmigrators/{name}/status Table 20.22. Global path parameters Parameter Type Description name string name of the KubeStorageVersionMigrator Table 20.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified KubeStorageVersionMigrator Table 20.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 20.25. HTTP responses HTTP code Response body 200 - OK KubeStorageVersionMigrator schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified KubeStorageVersionMigrator Table 20.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes.
The value must be no more than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 20.27. Body parameters Parameter Type Description body Patch schema Table 20.28. HTTP responses HTTP code Response body 200 - OK KubeStorageVersionMigrator schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified KubeStorageVersionMigrator Table 20.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be no more than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 20.30. Body parameters Parameter Type Description body KubeStorageVersionMigrator schema Table 20.31.
HTTP responses HTTP code Response body 200 - OK KubeStorageVersionMigrator schema 201 - Created KubeStorageVersionMigrator schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/operator_apis/kubestorageversionmigrator-operator-openshift-io-v1
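As a brief illustration of the endpoints documented above, the same list and read operations can be driven from the command line. The resource name cluster is an assumption (it is the conventional name for this singleton operator resource); substitute the name reported by your cluster, and replace <api-server> with your API server host:
oc get kubestorageversionmigrators
oc get kubestorageversionmigrator cluster -o yaml
curl -k -H "Authorization: Bearer $(oc whoami -t)" "https://<api-server>:6443/apis/operator.openshift.io/v1/kubestorageversionmigrators?limit=500"
The limit and continue query parameters behave as documented above: if the response sets metadata.continue, pass that value back as continue= on a follow-up request to page through the remaining results.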
6.4. IPsec Host-to-Host Configuration | 6.4. IPsec Host-to-Host Configuration IPsec can be configured to connect one desktop or workstation to another by way of a host-to-host connection. This type of connection uses the network to which each host is connected to create the secure tunnel to each other. The requirements of a host-to-host connection are minimal, as is the configuration of IPsec on each host. The hosts need only a dedicated connection to a carrier network (such as the Internet) and Red Hat Enterprise Linux to create the IPsec connection. The first step in creating a connection is to gather system and network information from each workstation. For a host-to-host connection, you need the following information: The IP address for both hosts A unique name to identify the IPsec connection and distinguish it from other devices or connections (for example, ipsec0 ) A fixed encryption key or one automatically generated by racoon A pre-shared authentication key that is used to initiate the connection and exchange encryption keys during the session For example, suppose Workstation A and Workstation B want to connect to each other through an IPsec tunnel. They want to connect using a pre-shared key with the value of foobarbaz and the users agree to let racoon automatically generate and share an authentication key between each host. Both host users decide to name their connections ipsec0 . The following is the ifcfg file for Workstation A for a host-to-host IPsec connection with Workstation B (the unique name to identify the connection in this example is ipsec0 , so the resulting file is named /etc/sysconfig/network-scripts/ifcfg-ipsec0 ): Workstation A replaces X.X.X.X with the IP address of Workstation B, while Workstation B replaces X.X.X.X with the IP address of Workstation A. The connection is set to initiate upon boot-up ( ONBOOT=yes ) and uses the pre-shared key method of authentication ( IKE_METHOD=PSK ). The following is the content of the pre-shared key file (called /etc/sysconfig/network-scripts/keys-ipsec0 ) that both workstations need to authenticate each other. The contents of this file should be identical on both workstations and only the root user should be able to read or write this file. Important To change the keys-ipsec0 file so that only the root user can read or edit the file, run the following command after creating the file: To change the authentication key at any time, edit the keys-ipsec0 file on both workstations. Both keys must be identical for proper connectivity . The following example shows the specific configuration for the phase 1 connection to the remote host. The file is named X.X.X.X .conf ( X.X.X.X is replaced with the IP address of the remote IPsec router). Note that this file is automatically generated once the IPsec tunnel is activated and should not be edited directly. The default phase 1 configuration file created when an IPsec connection is initialized contains the following statements used by the Red Hat Enterprise Linux implementation of IPsec: remote X.X.X.X Specifies that the subsequent stanzas of this configuration file apply only to the remote node identified by the X.X.X.X IP address. exchange_mode aggressive The default configuration for IPsec on Red Hat Enterprise Linux uses an aggressive authentication mode, which lowers the connection overhead while allowing configuration of several IPsec connections with multiple hosts. my_identifier address Defines the identification method to be used when authenticating nodes.
Red Hat Enterprise Linux uses IP addresses to identify nodes. encryption_algorithm 3des Defines the encryption cipher used during authentication. By default, Triple Data Encryption Standard ( 3DES ) is used. hash_algorithm sha1; Specifies the hash algorithm used during phase 1 negotiation between nodes. By default, Secure Hash Algorithm version 1 is used. authentication_method pre_shared_key Defines the authentication method used during node negotiation. Red Hat Enterprise Linux by default uses pre-shared keys for authentication. dh_group 2 Specifies the Diffie-Hellman group number for establishing dynamically-generated session keys. By default, the 1024-bit group is used. The /etc/racoon/racoon.conf files should be identical on all IPsec nodes except for the include "/etc/racoon/ X.X.X.X .conf" statement. This statement (and the file it references) is generated when the IPsec tunnel is activated. For Workstation A, the X.X.X.X in the include statement is Workstation B's IP address. The opposite is true of Workstation B. The following shows a typical racoon.conf file when the IPsec connection is activated. This default racoon.conf file includes defined paths for IPsec configuration, pre-shared key files, and certificates. The fields in sainfo anonymous describe the phase 2 SA between the IPsec nodes - the nature of the IPsec connection (including the supported encryption algorithms used) and the method of exchanging keys. The following list defines the fields of phase 2: sainfo anonymous Denotes that SA can anonymously initialize with any peer insofar as the IPsec credentials match. pfs_group 2 Defines the Diffie-Hellman key exchange protocol, which determines the method in which the IPsec nodes establish a mutual temporary session key for the second phase of IPsec connectivity. By default, the Red Hat Enterprise Linux implementation of IPsec uses group 2 (or modp1024 ) of the Diffie-Hellman cryptographic key exchange groups. Group 2 uses a 1024-bit modular exponentiation that prevents attackers from decrypting IPsec transmissions even if a private key is compromised. lifetime time 1 hour This parameter specifies the life cycle of an SA and can be quantified either by time or by bytes of data. The Red Hat Enterprise Linux implementation of IPsec specifies a one hour lifetime. encryption_algorithm 3des, blowfish 448, rijndael Specifies the supported encryption ciphers for phase 2. Red Hat Enterprise Linux supports 3DES, 448-bit Blowfish, and Rijndael (the cipher used in the Advanced Encryption Standard , or AES ). authentication_algorithm hmac_sha1, hmac_md5 Lists the supported hash algorithms for authentication. Supported modes are sha1 and md5 hashed message authentication codes (HMAC). compression_algorithm deflate Defines the Deflate compression algorithm for IP Payload Compression (IPCOMP) support, which allows for potentially faster transmission of IP datagrams over slow connections. To start the connection, either reboot the workstation or execute the following command as root on each host: To test the IPsec connection, run the tcpdump utility to view the network packets being transferred between the hosts (or networks) and verify that they are encrypted via IPsec. The packets should include an AH header and should be shown as ESP packets. ESP means it is encrypted. For example: | [
"DST= X.X.X.X TYPE=IPSEC ONBOOT=yes IKE_METHOD=PSK",
"IKE_PSK=foobarbaz",
"chmod 600 /etc/sysconfig/network-scripts/keys-ipsec0",
"; remote X.X.X.X { exchange_mode aggressive, main; my_identifier address; proposal { encryption_algorithm 3des; hash_algorithm sha1; authentication_method pre_shared_key; dh_group 2 ; } }",
"Racoon IKE daemon configuration file. See 'man racoon.conf' for a description of the format and entries. path include \"/etc/racoon\"; path pre_shared_key \"/etc/racoon/psk.txt\"; path certificate \"/etc/racoon/certs\"; sainfo anonymous { pfs_group 2; lifetime time 1 hour ; encryption_algorithm 3des, blowfish 448, rijndael ; authentication_algorithm hmac_sha1, hmac_md5 ; compression_algorithm deflate ; } include \"/etc/racoon/ X.X.X.X .conf\"",
"/sbin/ifup ipsec0",
"17:13:20.617872 pinky.example.com > ijin.example.com: AH(spi=0x0aaa749f,seq=0x335): ESP(spi=0x0ec0441e,seq=0x335) (DF)"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s1-ipsec-host2host |
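To complement the section above, a minimal verification sketch follows; the interface name eth0 is an assumption for your environment, and X.X.X.X is the remote workstation's address as in the examples above. After the configuration files are in place on both workstations, activate the connection and confirm that the traffic is encapsulated:
/sbin/ifup ipsec0
ping X.X.X.X
tcpdump -n -i eth0 host X.X.X.X
The ping generates traffic through the tunnel, and the tcpdump output should show AH and ESP packets similar to the example output above rather than plain ICMP packets.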
Chapter 34. Desktop | Chapter 34. Desktop Broken pygobject3 package dependencies prevent upgrade from Red Hat Enterprise Linux 7.1 The pygobject3-devel.i686 32-bit package has been removed in Red Hat Enterprise Linux 7.2 and was replaced with a multilib version. If you have the 32-bit version of the package installed on a Red Hat Enterprise Linux 7.1 system, then you will encounter a yum error when attempting to upgrade to Red Hat Enterprise Linux 7.2. To work around this problem, use the yum remove pygobject3-devel.i686 command as root to uninstall the 32-bit version of the package before upgrading your system. Build requirements not defined correctly for Emacs The binutils package earlier than version 2.23.52.0.1-54 causes a segmentation fault during the build. As a consequence, it is not possible to build the Emacs text editor on IBM Power Systems. To work around this problem, install the latest binutils . External display issues when combining laptop un/dock and suspend In the GNOME desktop environment, with some laptops, external displays connected to a docking station might not be automatically activated when resuming a suspended laptop after it has been undocked and docked again. To work around this problem, open the Displays configuration panel or run the xrandr command in a terminal. This makes the external displays available again. Emacs sometimes terminates unexpectedly when using the up arrow on ARM On the ARM architecture, the Emacs text editor sometimes terminates unexpectedly with a segmentation fault when scrolling up a file buffer. This happens only when the syntax highlighting is enabled. There is not currently any known workaround for this problem. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.2_release_notes/known-issues-desktop |
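For convenience, the workarounds described above can be run as shell commands; this is only a sketch of the documented steps (run the package removal as root):
yum remove pygobject3-devel.i686
xrandr
The first command removes the 32-bit pygobject3-devel package before upgrading to Red Hat Enterprise Linux 7.2, and running xrandr after docking re-probes the outputs so that external displays become available again.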
Getting started with Automation Services Catalog | Getting started with Automation Services Catalog Red Hat Ansible Automation Platform 2.3 Initial configurations and workflows Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/getting_started_with_automation_services_catalog/index |
15.5. Live KVM Migration with virsh | 15.5. Live KVM Migration with virsh A guest virtual machine can be migrated to another host physical machine with the virsh command. The migrate command accepts parameters in the following format: Note that the --live option may be eliminated when live migration is not required. Additional options are listed in Section 15.5.2, "Additional Options for the virsh migrate Command" . The GuestName parameter represents the name of the guest virtual machine which you want to migrate. The DestinationURL parameter is the connection URL of the destination host physical machine. The destination system must run the same version of Red Hat Enterprise Linux, be using the same hypervisor and have libvirt running. Note The DestinationURL parameter for normal migration and peer2peer migration has different semantics: normal migration: the DestinationURL is the URL of the target host physical machine as seen from the source guest virtual machine. peer2peer migration: DestinationURL is the URL of the target host physical machine as seen from the source host physical machine. Once the command is entered, you will be prompted for the root password of the destination system. Important Name resolution must be working on both sides (source and destination) in order for migration to succeed. Each side must be able to find the other. Make sure that you can ping one side to the other to check that the name resolution is working. Example: live migration with virsh This example migrates from host1.example.com to host2.example.com . Change the host physical machine names for your environment. This example migrates a virtual machine named guest1-rhel7-64 . This example assumes you have fully configured shared storage and meet all the prerequisites (listed here: Migration requirements ). Verify the guest virtual machine is running From the source system, host1.example.com , verify guest1-rhel7-64 is running: Migrate the guest virtual machine Execute the following command to live migrate the guest virtual machine to the destination, host2.example.com . Append /system to the end of the destination URL to tell libvirt that you need full access. Once the command is entered you will be prompted for the root password of the destination system. Wait The migration may take some time depending on load and the size of the guest virtual machine. virsh only reports errors. The guest virtual machine continues to run on the source host physical machine until fully migrated. Verify the guest virtual machine has arrived at the destination host From the destination system, host2.example.com , verify guest1-rhel7-64 is running: The live migration is now complete. Note libvirt supports a variety of networking methods including TLS/SSL, UNIX sockets, SSH, and unencrypted TCP. For more information on using other methods, see Chapter 18, Remote Management of Guests . Note Non-running guest virtual machines can be migrated using the following command: 15.5.1. Additional Tips for Migration with virsh It is possible to perform multiple, concurrent live migrations where each migration runs in a separate command shell. However, this should be done with caution and should involve careful calculations as each migration instance uses one MAX_CLIENT from each side (source and target). As the default setting is 20, that is enough to run 10 instances without changing the settings. Should you need to change the settings, see Procedure 15.1, "Configuring libvirtd.conf" .
Open the libvirtd.conf file as described in Procedure 15.1, "Configuring libvirtd.conf" . Look for the Processing controls section. Change the max_clients and max_workers parameters settings. It is recommended that the number be the same in both parameters. The max_clients will use 2 clients per migration (one per side) and max_workers will use 1 worker on the source and 0 workers on the destination during the perform phase and 1 worker on the destination during the finish phase. Important The max_clients and max_workers parameters settings are affected by all guest virtual machine connections to the libvirtd service. This means that any user that is using the same guest virtual machine and is performing a migration at the same time will also obey the limits set in the max_clients and max_workers parameters settings. This is why the maximum value needs to be considered carefully before performing a concurrent live migration. Important The max_clients parameter controls how many clients are allowed to connect to libvirt. When a large number of containers are started at once, this limit can be easily reached and exceeded. The value of the max_clients parameter could be increased to avoid this, but doing so can leave the system more vulnerable to denial of service (DoS) attacks against instances. To alleviate this problem, a new max_anonymous_clients setting has been introduced in Red Hat Enterprise Linux 7.0 that specifies a limit of connections which are accepted but not yet authenticated. You can implement a combination of max_clients and max_anonymous_clients to suit your workload. Save the file and restart the service. Note There may be cases where a migration connection drops because there are too many ssh sessions that have been started, but not yet authenticated. By default, sshd allows only 10 sessions to be in a "pre-authenticated state" at any time. This setting is controlled by the MaxStartups parameter in the sshd configuration file (located here: /etc/ssh/sshd_config ), which may require some adjustment. Adjusting this parameter should be done with caution as the limitation is put in place to prevent DoS attacks (and over-use of resources in general). Setting this value too high will negate its purpose. To change this parameter, edit the file /etc/ssh/sshd_config , remove the # from the beginning of the MaxStartups line, and change the 10 (default value) to a higher number. Remember to save the file and restart the sshd service. For more information, see the sshd_config man page. 15.5.2. Additional Options for the virsh migrate Command In addition to --live , virsh migrate accepts the following options: --direct - used for direct migration --p2p - used for peer-to-peer migration --tunneled - used for tunneled migration --offline - migrates domain definition without starting the domain on destination and without stopping it on source host. Offline migration may be used with inactive domains and it must be used with the --persistent option. --persistent - leaves the domain persistent on destination host physical machine --undefinesource - undefines the domain on the source host physical machine --suspend - leaves the domain paused on the destination host physical machine --change-protection - enforces that no incompatible configuration changes will be made to the domain while the migration is underway; this flag is implicitly enabled when supported by the hypervisor, but can be explicitly used to reject the migration if the hypervisor lacks change protection support. 
--unsafe - forces the migration to occur, ignoring all safety procedures. --verbose - displays the progress of migration as it is occurring --compressed - activates compression of memory pages that have to be transferred repeatedly during live migration. --abort-on-error - cancels the migration if a soft error (for example I/O error) happens during the migration. --domain [name] - sets the domain name, id or uuid. --desturi [URI] - connection URI of the destination host as seen from the client (normal migration) or source (p2p migration). --migrateuri [URI] - the migration URI, which can usually be omitted. --graphicsuri [URI] - graphics URI to be used for seamless graphics migration. --listen-address [address] - sets the listen address that the hypervisor on the destination side should bind to for incoming migration. --timeout [seconds] - forces a guest virtual machine to suspend when the live migration counter exceeds N seconds. It can only be used with a live migration. Once the timeout is initiated, the migration continues on the suspended guest virtual machine. --dname [newname] - is used for renaming the domain during migration, which also usually can be omitted --xml [filename] - the filename indicated can be used to supply an alternative XML file for use on the destination to supply a larger set of changes to any host-specific portions of the domain XML, such as accounting for naming differences between source and destination in accessing underlying storage. This option is usually omitted. --migrate-disks [disk_identifiers] - this option can be used to select which disks are copied during the migration. This allows for more efficient live migration when copying certain disks is undesirable, such as when they already exist on the destination, or when they are no longer useful. [disk_identifiers] should be replaced by a comma-separated list of disks to be migrated, identified by their arguments found in the <target dev= /> line of the Domain XML file. In addition, the following commands may help as well: virsh migrate-setmaxdowntime [domain] [downtime] - will set a maximum tolerable downtime for a domain which is being live-migrated to another host. The specified downtime is in milliseconds. The domain specified must be the same domain that is being migrated. virsh migrate-compcache [domain] --size - will set and or get the size of the cache in bytes which is used for compressing repeatedly transferred memory pages during a live migration. When the --size is not used the command displays the current size of the compression cache. When --size is used, and specified in bytes, the hypervisor is asked to change compression to match the indicated size, following which the current size is displayed. The --size argument is supposed to be used while the domain is being live migrated as a reaction to the migration progress and increasing number of compression cache misses obtained from the domjobinfo . virsh migrate-setspeed [domain] [bandwidth] - sets the migration bandwidth in Mib/sec for the specified domain which is being migrated to another host. virsh migrate-getspeed [domain] - gets the maximum migration bandwidth that is available in Mib/sec for the specified domain. For more information, see Migration Limitations or the virsh man page. | [
"virsh migrate --live GuestName DestinationURL",
"virsh list Id Name State ---------------------------------- 10 guest1-rhel6-64 running",
"virsh migrate --live guest1-rhel7-64 qemu+ssh://host2.example.com/system",
"virsh list Id Name State ---------------------------------- 10 guest1-rhel7-64 running",
"virsh migrate --offline --persistent",
"################################################################# # Processing controls # The maximum number of concurrent client connections to allow over all sockets combined. #max_clients = 5000 The maximum length of queue of connections waiting to be accepted by the daemon. Note, that some protocols supporting retransmission may obey this so that a later reattempt at connection succeeds. #max_queued_clients = 1000 The minimum limit sets the number of workers to start up initially. If the number of active clients exceeds this, then more threads are spawned, upto max_workers limit. Typically you'd want max_workers to equal maximum number of clients allowed #min_workers = 5 #max_workers = 20 The number of priority workers. If all workers from above pool will stuck, some calls marked as high priority (notably domainDestroy) can be executed in this pool. #prio_workers = 5 Total global limit on concurrent RPC calls. Should be at least as large as max_workers. Beyond this, RPC requests will be read into memory and queued. This directly impact memory usage, currently each request requires 256 KB of memory. So by default upto 5 MB of memory is used # XXX this isn't actually enforced yet, only the per-client limit is used so far #max_requests = 20 Limit on concurrent requests from a single client connection. To avoid one client monopolizing the server this should be a small fraction of the global max_requests and max_workers parameter #max_client_requests = 5 #################################################################"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-KVM_live_migration-Live_KVM_migration_with_virsh |
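As a hedged illustration of the additional options listed above, the following combines several of them for the same example guest and hosts; the timeout, downtime, and bandwidth values are arbitrary figures for demonstration only:
virsh migrate --live --verbose --compressed --timeout 120 guest1-rhel7-64 qemu+ssh://host2.example.com/system
virsh migrate-setmaxdowntime guest1-rhel7-64 500
virsh migrate-setspeed guest1-rhel7-64 100
virsh migrate-getspeed guest1-rhel7-64
The downtime value is given in milliseconds and the bandwidth in Mib/sec, as described above, and the setmaxdowntime and setspeed commands are issued while the migration is in progress.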
Chapter 4. Getting Started with Virtualization Command-line Interface | Chapter 4. Getting Started with Virtualization Command-line Interface The standard method of operating virtualization on Red Hat Enterprise Linux 7 is using the command-line user interface (CLI). Entering CLI commands activates system utilities that create or interact with virtual machines on the host system. This method offers more detailed control than using graphical applications such as virt-manager and provides opportunities for scripting and automation. 4.1. Primary Command-line Utilities for Virtualization The following subsections list the main command-line utilities you can use to set up and manage virtualization on Red Hat Enterprise Linux 7. These commands, as well as numerous other virtualization utilities, are included in packages provided by the Red Hat Enterprise Linux repositories and can be installed using the Yum package manager . For more information about installing virtualization packages, see the Virtualization Deployment and Administration Guide . 4.1.1. virsh virsh is a CLI utility for managing hypervisors and guest virtual machines. It is the primary means of controlling virtualization on Red Hat Enterprise Linux 7. Its capabilities include: Creating, configuring, pausing, listing, and shutting down virtual machines Managing virtual networks Loading virtual machine disk images The virsh utility is ideal for creating virtualization administration scripts. Users without root privileges can use virsh as well, but in read-only mode. Using virsh The virsh utility can be used in a standard command-line input, but also as an interactive shell. In shell mode, the virsh command prefix is not needed, and the user is always registered as root. The following example uses the virsh hostname command to display the hypervisor's host name - first in standard mode, then in interactive mode. Important When using virsh as a non-root user, you enter an unprivileged libvirt session , which means you cannot see or interact with guests or any other virtualized elements created by the root. To gain read-only access to the elements, use virsh with the -c qemu:///system option. Getting help with virsh Like with all Linux bash commands, you can obtain help with virsh by using the man virsh command or the --help option. In addition, the virsh help command can be used to view the help text of a specific virsh command, or, by using a keyword, to list all virsh commands that belong to a certain group. The virsh command groups and their respective keywords are as follows: Guest management - keyword domain Guest monitoring - keyword monitor Host and hypervisor monitoring and management- keyword host Host system network interface management - keyword interface Virtual network management - keyword network Network filter management - keyword filter Node device management - keyword nodedev Management of secrets, such as passphrases or encryption keys - keyword secret Snapshot management - keyword snapshot Storage pool management - keyword pool Storage volume management - keyword volume General virsh usage - keyword virsh In the following example, you need to learn how to rename a guest virtual machine. By using virsh help , you first find the proper command to use and then learn its syntax. Finally, you use the command to rename a guest called Fontaine to Atlas . Example 4.1. 
How to list help for all commands with a keyword Note For more information about managing virtual machines using virsh , see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide . 4.1.2. virt-install virt-install is a CLI utility for creating new virtual machines. It supports both text-based and graphical installations, using serial console, SPICE, or VNC client-server pair graphics. Installation media can be local, or exist remotely on an NFS, HTTP, or FTP server. The tool can also be configured to run unattended and use the kickstart method to prepare the guest, allowing for easy automation of installation. This tool is included in the virt-install package. Important When using virt-install as a non-root user, you enter an unprivileged libvirt session . This means that the created guest will only be visible to you, and it will not have access to certain capabilities that guests created by the root have. Note For more information about using virt-install , see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide . 4.1.3. virt-xml virt-xml is a command-line utility for editing domain XML files. For the XML configuration to be modified successfully, the name of the guest, the XML action, and the change to make must be included with the command. For example, the following lists the suboptions that relate to guest boot configuration, and then turns on the boot menu on the example_domain guest: Note that each invocation of the command can perform one action on one domain XML file. Note This tool is included in the virt-install package. For more information about using virt-xml , see the virt-xml man pages. 4.1.4. guestfish guestfish is a command-line utility for examining and modifying virtual machine disk images. It uses the libguestfs library and exposes all functionalities provided by the libguestfs API. Using guestfish The guestfish utility can be used in a standard command-line input mode, but also as an interactive shell. In shell mode, the guestfish command prefix is not needed, and the user is always registered as root. The following example uses the guestfish to display the file systems on the testguest virtual machine - first in standard mode, then in interactive mode. In addition, guestfish can be used in bash scripts for automation purposes. Important When using guestfish as a non-root user, you enter an unprivileged libvirt session . This means you cannot see or interact with disk images on guests created by the root. To gain read-only access to these disk images, use guestfish with the -ro -c qemu:///system options. In addition, you must have read privileges for the disk image files. Getting help with guestfish Like with all Linux bash commands, you can obtain help with guestfish by using the man guestfish command or the --help option. In addition, the guestfish help command can be used to view detailed information about a specific guestfish command. The following example displays information about the guestfish add command: Note For more information about guestfish , see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide . | [
"virsh hostname localhost.localdomain USD virsh Welcome to virsh, the virtualization interactive terminal. Type: 'help' for help with commands 'quit' to quit virsh # hostname localhost.localdomain",
"virsh help domain Domain Management (help keyword 'domain'): attach-device attach device from an XML file attach-disk attach disk device [...] domname convert a domain id or UUID to domain name domrename rename a domain [...] virsh help domrename NAME domrename - rename a domain SYNOPSIS domrename <domain> <new-name> DESCRIPTION Rename an inactive domain. OPTIONS [--domain] <string> domain name, id or uuid [--new-name] <string> new domain name virsh domrename --domain Fontaine --new-name Atlas Domain successfully renamed",
"virt-xml boot=? --boot options: arch cdrom [...] menu network nvram nvram_template os_type smbios_mode uefi useserial virt-xml example_domain --edit --boot menu=on Domain 'example_domain' defined successfully.",
"guestfish domain testguest : run : list-filesystems /dev/sda1: xfs /dev/rhel/root: xfs /dev/rhel/swap: swap guestfish Welcome to guestfish, the guest filesystem shell for editing virtual machine filesystems and disk images. Type: 'help' for help on commands 'man' to read the manual 'quit' to quit the shell ><fs> domain testguest ><fs> run ><fs> list-filesystems /dev/sda1: xfs /dev/rhel/root: xfs /dev/rhel/swap: swap",
"guestfish help add NAME add-drive - add an image to examine or modify SYNOPSIS add-drive filename [readonly:true|false] [format:..] [iface:..] [name:..] [label:..] [protocol:..] [server:..] [username:..] [secret:..] [cachemode:..] [discard:..] [copyonread:true|false] DESCRIPTION This function adds a disk image called filename to the handle. filename may be a regular host file or a host device. [...]"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_getting_started_guide/chap-cli-intro |
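Because the section above describes virt-install without a complete command line, the following sketch shows a typical installation; every value (guest name, memory, disk size, ISO path, and OS variant) is an illustrative assumption for your environment:
virt-install --name guest1-rhel7 --memory 2048 --vcpus 2 --disk size=10 --cdrom /home/user/rhel-server-7.5-x86_64-dvd.iso --os-variant rhel7.0 --graphics vnc
For the unattended kickstart method mentioned above, the installation tree is usually supplied with --location, and the kickstart file is passed through --initrd-inject and --extra-args instead of using --cdrom.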
A.2. UI Connection Problems | A.2. UI Connection Problems If negotiate authentication is not working, turn on verbose logging for the authentication process to help diagnose the issue: Close all browser windows. In a terminal, set the new log levels for Firefox: This enables verbose logging and logs all information to /tmp/moz.log . Restart the browser from the same terminal window. Some of the common error messages and workarounds are in Table A.1, "UI Error Log Messages" . Table A.1. UI Error Log Messages Error Log Message Description and Fix There are no Kerberos tickets. Run kinit . This can occur when you have successfully obtained Kerberos tickets but are still unable to authenticate to the UI. This indicates that there is a problem with the Kerberos configuration. The first place to check is the [domain_realm] section in the /etc/krb5.conf file. Make sure that the IdM Kerberos domain entry is correct and matches the configuration in the Firefox negotiation parameters. For example: Nothing is in the log file. It is possible that you are behind a proxy which is removing the HTTP headers required for negotiate authentication. Try to connect to the server using HTTPS instead, which allows the request to pass through unmodified. Then check the log file again. | [
"export NSPR_LOG_MODULES=negotiateauth:5 export NSPR_LOG_FILE=/tmp/moz.log",
"-1208550944[90039d0]: entering nsNegotiateAuth::GetNextToken() -1208550944[90039d0]: gss_init_sec_context() failed: Miscellaneous failure No credentials cache found",
"-1208994096[8d683d8]: entering nsAuthGSSAPI::GetNextToken() -1208994096[8d683d8]: gss_init_sec_context() failed: Miscellaneous failure Server not found in Kerberos database",
".example.com = EXAMPLE.COM example.com = EXAMPLE.COM"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/Troubleshooting-UI |
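A short checklist for the first two errors in the table may help before restarting the browser; the realm EXAMPLE.COM and the principal are placeholders for your environment:
kinit user@EXAMPLE.COM
klist
grep -A 3 '\[domain_realm\]' /etc/krb5.conf
If klist shows no ticket cache, the "No credentials cache found" message is expected until kinit succeeds; if a ticket exists but the "Server not found in Kerberos database" error persists, compare the [domain_realm] entries with the example above.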
Appendix G. Branding | Appendix G. Branding G.1. Branding G.1.1. Re-Branding the Manager Various aspects of the Red Hat Virtualization Manager can be customized, such as the icons used by and text displayed in pop-up windows and the links shown on the Welcome Page. This allows you to re-brand the Manager and gives you fine-grained control over the end look and feel presented to administrators and users. The files required to customize the Manager are located in the /etc/ovirt-engine/branding/ directory on the system on which the Manager is installed. The files comprise a set of cascading style sheet files that are used to style various aspects of the graphical user interface and a set of properties files that contain messages and links that are incorporated into various components of the Manager. To customize a component, edit the file for that component and save the changes. The next time you open or refresh that component, the changes will be applied. G.1.2. Login Screen The login screen is used by both the Administration Portal and the VM Portal. The elements of the login screen that can be customized are as follows: The border The header image on the left The header image on the right The header text The classes for the login screen are located in common.css . G.1.3. Administration Portal Screen The administration portal screen is the main screen that is shown when you log into the Administration Portal. The elements of the administration portal screen that can be customized are as follows: The logo The left background image The center background image The right background image The text to the right of the logo The classes for the administration portal screen are located in web_admin.css . G.1.4. VM Portal Screen The VM Portal screen is the screen that is shown when you log into the VM Portal. The elements of the VM Portal screen that can be customized are as follows: The logo The center background image The right background image The border around the main grid The text above the Logged in user label The classes for the VM Portal screen are located in user_portal.css . G.1.5. Pop-Up Windows Pop-up windows are all windows in the Manager that allow you to create, edit or update an entity such as a host or virtual machine. The elements of pop-up windows that can be customized are as follows: The border The header image on the left The header center image (repeated) The classes for pop-up windows are located in common.css . G.1.6. Tabs Many pop-up windows in the Administration Portal include tabs. The elements of these tabs that can be customized are as follows: Active Inactive The classes for tabs are located in common.css and user_portal.css . G.1.7. The Welcome Page The Welcome Page is the page that is initially displayed when you visit the homepage of the Manager. In addition to customizing the overall look and feel, you can also make other changes such as adding links to the page for additional documentation or internal websites by editing a template file. The elements of the Welcome Page that can be customized are as follows: The page title The header (left, center and right) The error message The link to forward and the associated message for that link Add a message banner or preamble The classes for the Welcome Page are located in welcome_style.css . The Template File The template file for the Welcome Page is a regular HTML file of the name welcome_page.template that does not contain HTML , HEAD or BODY tags.
This file is inserted directly into the Welcome Page itself, and acts as a container for the content that is displayed in the Welcome Page. As such, you must edit this file to add new links or change the content itself. Another feature of the template file is that it contains placeholder text such as {user_portal} that is replaced by corresponding text in the messages.properties file when the Welcome Page is processed. The Preamble You can add a custom message banner to the Welcome Page by adding a preamble.template containing the banner text and a preamble.css file defining the banner size, and linking them in the branding.properties file. Sample files are available at sample preamble template . Note During an engine upgrade, the custom message banner remains in place and continues to work without issue. After an engine backup and restore, however, the custom message banner must be manually restored and verified. G.1.8. The Page Not Found Page The Page Not Found page is displayed when you open a link to a page that cannot be found in the Red Hat Virtualization Manager. The elements of the Page Not Found page that can be customized are as follows: The page title The header (left, center and right) The error message The link to forward and the associated message for that link The classes for the Page Not Found page are located in welcome_style.css . | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/appe-branding
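As a sketch of the preamble customization described above, the banner itself is ordinary HTML and CSS placed under the branding directory; the file contents and the subdirectory name below are illustrative, and the exact branding.properties keys should be taken from the linked sample files:
/etc/ovirt-engine/branding/example.brand/preamble.template
<div class="preamble">Authorized users only. Activity on this system may be monitored.</div>
/etc/ovirt-engine/branding/example.brand/preamble.css
.preamble { height: 24px; text-align: center; }
After both files are linked in branding.properties as shown in the sample files, refresh the Welcome Page to confirm that the banner is rendered.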
Chapter 9. Monitoring the Network Observability Operator | Chapter 9. Monitoring the Network Observability Operator You can use the web console to monitor alerts related to the health of the Network Observability Operator. 9.1. Health dashboards Metrics about health and resource usage of the Network Observability Operator are located in the Observe Dashboards page in the web console. You can view metrics about the health of the Operator in the following categories: Flows per second Sampling Errors last minute Dropped flows per second Flowlogs-pipeline statistics Flowlogs-pipeline statistics views eBPF agent statistics views Operator statistics Resource usage 9.2. Health alerts A health alert banner that directs you to the dashboard can appear on the Network Traffic and Home pages if an alert is triggered. Alerts are generated in the following cases: The NetObservLokiError alert occurs if the flowlogs-pipeline workload is dropping flows because of Loki errors, such as if the Loki ingestion rate limit has been reached. The NetObservNoFlows alert occurs if no flows are ingested for a certain amount of time. The NetObservFlowsDropped alert occurs if the Network Observability eBPF agent hashmap table is full, and the eBPF agent processes flows with degraded performance, or when the capacity limiter is triggered. 9.3. Viewing health information You can access metrics about health and resource usage of the Network Observability Operator from the Dashboards page in the web console. Prerequisites You have the Network Observability Operator installed. You have access to the cluster as a user with the cluster-admin role or with view permissions for all projects. Procedure From the Administrator perspective in the web console, navigate to Observe Dashboards . From the Dashboards dropdown, select Netobserv/Health . View the metrics about the health of the Operator that are displayed on the page. 9.3.1. Disabling health alerts You can opt out of health alerting by editing the FlowCollector resource: In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster , and then select the YAML tab. Add spec.processor.metrics.disableAlerts to disable health alerts, as in the following YAML sample: apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: processor: metrics: disableAlerts: [NetObservLokiError, NetObservNoFlows] 1 1 You can specify one or a list with both types of alerts to disable. 9.4. Creating Loki rate limit alerts for the NetObserv dashboard You can create custom alerting rules for the Netobserv dashboard metrics to trigger alerts when Loki rate limits have been reached. Prerequisites You have access to the cluster as a user with the cluster-admin role or with view permissions for all projects. You have the Network Observability Operator installed. Procedure Create a YAML file by clicking the import icon, + . Add an alerting rule configuration to the YAML file. In the YAML sample that follows, an alert is created for when Loki rate limits have been reached: apiVersion: monitoring.openshift.io/v1 kind: AlertingRule metadata: name: loki-alerts namespace: openshift-monitoring spec: groups: - name: LokiRateLimitAlerts rules: - alert: LokiTenantRateLimit annotations: message: |- {{ USDlabels.job }} {{ USDlabels.route }} is experiencing 429 errors. summary: "At any number of requests are responded with the rate limit error code."
expr: sum(irate(loki_request_duration_seconds_count{status_code="429"}[1m])) by (job, namespace, route) / sum(irate(loki_request_duration_seconds_count[1m])) by (job, namespace, route) * 100 > 0 for: 10s labels: severity: warning Click Create to apply the configuration file to the cluster. 9.5. Using the eBPF agent alert An alert, NetObservAgentFlowsDropped , is triggered when the Network Observability eBPF agent hashmap table is full or when the capacity limiter is triggered. If you see this alert, consider increasing the cacheMaxFlows in the FlowCollector , as shown in the following example. Note Increasing the cacheMaxFlows might increase the memory usage of the eBPF agent. Procedure In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for the Network Observability Operator , select Flow Collector . Select cluster , and then select the YAML tab. Increase the spec.agent.ebpf.cacheMaxFlows value, as shown in the following YAML sample: 1 Increase the cacheMaxFlows value from its value at the time of the NetObservAgentFlowsDropped alert. Additional resources For more information about creating alerts that you can see on the dashboard, see Creating alerting rules for user-defined projects . | [
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: processor: metrics: disableAlerts: [NetObservLokiError, NetObservNoFlows] 1",
"apiVersion: monitoring.openshift.io/v1 kind: AlertingRule metadata: name: loki-alerts namespace: openshift-monitoring spec: groups: - name: LokiRateLimitAlerts rules: - alert: LokiTenantRateLimit annotations: message: |- {{ USDlabels.job }} {{ USDlabels.route }} is experiencing 429 errors. summary: \"At any number of requests are responded with the rate limit error code.\" expr: sum(irate(loki_request_duration_seconds_count{status_code=\"429\"}[1m])) by (job, namespace, route) / sum(irate(loki_request_duration_seconds_count[1m])) by (job, namespace, route) * 100 > 0 for: 10s labels: severity: warning",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: cacheMaxFlows: 200000 1"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/network_observability/network-observability-operator-monitoring |
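If you prefer the command line to the web console steps above, the same cacheMaxFlows change can be applied as a single patch against the FlowCollector resource named cluster ; the value 200000 is simply the example figure used above, not a sizing recommendation:
oc patch flowcollector cluster --type=merge -p '{"spec":{"agent":{"ebpf":{"cacheMaxFlows":200000}}}}'
oc get flowcollector cluster -o jsonpath='{.spec.agent.ebpf.cacheMaxFlows}'
After the change, watch the eBPF agent panels on the health dashboard, because a larger hashmap table increases the memory usage of the agent.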
11.5. The REST Interface Connector | 11.5. The REST Interface Connector The REST connector differs from the Hot Rod and Memcached connectors because it requires a web subsystem. Therefore, configurations such as socket-binding, worker threads, and timeouts must be performed on the web subsystem. The following enables a REST server: See Section 11.6, "Using the REST Interface" for more information. 11.5.1. Configure REST Connectors Use the following procedure to configure the rest-connector element in Red Hat JBoss Data Grid's Remote Client-Server mode. Procedure 11.1. Configuring REST Connectors for Remote Client-Server Mode The rest-connector element specifies the configuration information for the REST connector. The virtual-server parameter specifies the virtual server used by the REST connector. The default value for this parameter is default-host . This is an optional parameter. The cache-container parameter names the cache container used by the REST connector. This is a mandatory parameter. The context-path parameter specifies the context path for the REST connector. The default value for this parameter is an empty string ( "" ). This is an optional parameter. The security-domain parameter specifies that the specified domain, declared in the security subsystem, should be used to authenticate access to the REST endpoint. This is an optional parameter. If this parameter is omitted, no authentication is performed. The auth-method parameter specifies the method used to retrieve credentials for the end point. The default value for this parameter is BASIC . Supported alternate values include BASIC , DIGEST , and CLIENT-CERT . This is an optional parameter. The security-mode parameter specifies whether authentication is required only for write operations (such as PUT, POST and DELETE) or for read operations (such as GET and HEAD) as well. Valid values for this parameter are WRITE for authenticating write operations only, or READ_WRITE to authenticate read and write operations. The default value for this parameter is READ_WRITE . | [
"<rest-connector virtual-server=\"default-host\" cache-container=\"local\" security-domain=\"other\" auth-method=\"BASIC\"/>",
"<subsystem xmlns=\"urn:infinispan:server:endpoint:6.1\"> <rest-connector virtual-server=\"default-host\" cache-container=\"local\" context-path=\"USD{CONTEXT_PATH}\" security-domain=\"USD{SECURITY_DOMAIN}\" auth-method=\"USD{METHOD}\" security-mode=\"USD{MODE}\" /> </subsystem>"
]
| https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-the_rest_interface_connector |
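A quick way to verify a rest-connector secured with auth-method BASIC is to issue authenticated requests against the endpoint, for example with curl. The sketch below is illustrative only: the host, port, credentials, cache name, and key are placeholders, and the /rest prefix depends on the context-path you configured, so adjust the URL to your deployment.
# Store a value under key "k1" in the cache named "default" (placeholder host and credentials)
curl -u admin:changeme -X PUT -H "Content-Type: text/plain" -d "value1" http://jdg-host:8080/rest/default/k1
# Read the value back with the same credentials
curl -u admin:changeme http://jdg-host:8080/rest/default/k1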
8.214. system-config-keyboard | 8.214. system-config-keyboard 8.214.1. RHBA-2013:0940 - system-config-keyboard bug fix update Updated system-config-keyboard packages that fix one bug are now available for Red Hat Enterprise Linux 6. The system-config-keyboard packages provide a graphical user interface that allows the user to change the default keyboard of the system. Bug Fix BZ# 952125 The system-config-keyboard packages contain a plug-in for firstboot. Previous versions of system-config-keyboard depended on firstboot, so it was not possible to install the packages without pulling in firstboot too. This erroneous dependency has been removed and the system-config-keyboard packages can now be installed without pulling in firstboot. Users of system-config-keyboard are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/system-config-keyboard
Chapter 6. Completing post customization tasks | Chapter 6. Completing post customization tasks To complete the customizations you made, perform the following tasks: Create a product.img image file (applies only for graphical customizations). Create a custom boot image. This section provides information about how to create a product.img image file and to create a custom boot image. 6.1. Creating a product.img file A product.img image file is an archive containing new installer files that replace the existing ones at runtime. During a system boot, Anaconda loads the product.img file from the images/ directory on the boot media. It then uses the files that are present in this directory to replace identically named files in the installer's file system. The replaced files customize the installer (for example, by replacing default images with custom ones). Note: The product.img image must contain a directory structure identical to the installer. For more information about the installer directory structure, see the table below. Table 6.1. Installer directory structure and custom contents Type of custom content File system location Pixmaps (logo, sidebar, top bar, and so on) /usr/share/anaconda/pixmaps/ GUI stylesheet /usr/share/anaconda/anaconda-gtk.css Anaconda add-ons /usr/share/anaconda/addons/ Product configuration files /etc/anaconda/product.d/ Custom configuration files /etc/anaconda/conf.d/ Anaconda DBus service conf files /usr/share/anaconda/dbus/confs/ Anaconda DBus service files /usr/share/anaconda/dbus/services/ The procedure below explains how to create a product.img file. Procedure Navigate to a working directory such as /tmp : Create a subdirectory named product/ : Create a directory structure identical to the location of the file you want to replace. For example, if you want to test an add-on that is present in the /usr/share/anaconda/addons directory on the installation system, create the same structure in your working directory: Note To view the installer's runtime files, boot the installation and switch to virtual console 1 ( Ctrl + Alt + F1 ) and then switch to the second tmux window ( Ctrl + b + 2 ). A shell prompt opens that you can use to browse the file system. Place your customized files (in this example, a custom add-on for Anaconda ) into the newly created directory: Repeat steps 3 and 4 (create a directory structure and place the custom files into it) for every file you want to add to the installer. Create a .buildstamp file in the root of the directory. The .buildstamp file describes the system version, the product, and several other parameters. The following is an example of a .buildstamp file from Red Hat Enterprise Linux 8.4: The IsFinal parameter specifies whether the image is for a release (GA) version of the product ( True ), or a pre-release such as Alpha, Beta, or an internal milestone ( False ). Navigate to the product/ directory, and create the product.img archive: This creates a product.img file one level above the product/ directory. Move the product.img file to the images/ directory of the extracted ISO image. The product.img file is now created and the customizations that you made are placed in the respective directories. Note Instead of adding the product.img file to the boot media, you can place this file into a different location and use the inst.updates= boot option at the boot menu to load it.
In that case, the image file can have any name, and it can be placed in any location (USB flash drive, hard disk, HTTP, FTP, or NFS server), as long as this location is reachable from the installation system. See the Anaconda Boot Options for more information about Anaconda boot options. 6.2. Creating custom boot images After you customize the boot images and the GUI layout, create a new image that includes the changes you made. To create custom boot images, follow the procedure below. Procedure Make sure that all of your changes are included in the working directory. For example, if you are testing an add-on, make sure to place the product.img in the images/ directory. Make sure your current working directory is the top-level directory of the extracted ISO image, for example, /tmp/ISO/iso/ . Create a new ISO image using the genisoimage command: In the above example: Make sure that the values for -V , -volset , and -A options match the image's boot loader configuration, if you are using the LABEL= directive for options that require a location to load a file on the same disk. If your boot loader configuration ( isolinux/isolinux.cfg for BIOS and EFI/BOOT/grub.cfg for UEFI) uses the inst.stage2=LABEL= disk_label stanza to load the second stage of the installer from the same disk, then the disk labels must match. Important In boot loader configuration files, replace all spaces in disk labels with \x20 . For example, if you create an ISO image with a RHEL 8.0 label, boot loader configuration should use RHEL\x208.0 . Replace the value of the -o option ( -o ../NEWISO.iso ) with the file name of your new image. The value in the example creates the NEWISO.iso file in the directory above the current one. For more information about this command, see the genisoimage(1) man page on your system. Implant an MD5 checksum into the image. Note that without an MD5 checksum, the image verification check might fail (the rd.live.check option in the boot loader configuration) and the installation can hang. In the above example, replace ../NEWISO.iso with the file name and the location of the ISO image that you created in the previous step. You can now write the new ISO image to physical media or a network server to boot it on physical hardware, or you can use it to start installing a virtual machine. Additional resources For instructions on preparing boot media or a network server, see Advanced installation boot options . For instructions on creating virtual machines with ISO images, see Configuring and Managing Virtualization . | [
"cd /tmp",
"mkdir product/",
"mkdir -p product/usr/share/anaconda/addons",
"cp -r ~/path/to/custom/addon/ product/usr/share/anaconda/addons/",
"[Main] Product=Red Hat Enterprise Linux Version=8.4 BugURL=https://bugzilla.redhat.com/ IsFinal=True UUID=202007011344.x86_64 [Compose] Lorax=28.14.49-1",
"cd product",
"find . | cpio -c -o | gzip -9cv > ../product.img",
"genisoimage -U -r -v -T -J -joliet-long -V \"RHEL-8 Server.x86_64\" -volset \"RHEL-8 Server.x86_64\" -A \"RHEL-8 Server.x86_64\" -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -eltorito-alt-boot -e images/efiboot.img -no-emul-boot -o ../NEWISO.iso .",
"implantisomd5 ../NEWISO.iso"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/customizing_anaconda/completing-post-customization-tasks_customizing-anaconda |
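Two quick checks can be run against the procedure above; both are sketches with placeholder paths and URLs rather than part of the original guide. Because product.img is a gzip-compressed cpio archive, its contents can be listed before it is copied to the media, and, as noted above, the image can also be loaded over the network with the inst.updates= boot option instead of being placed in the images/ directory.
# List the contents of the archive to confirm the directory structure (run in the directory containing product.img)
zcat product.img | cpio -itv
# Example boot option for loading the image from a network location reachable by the installer (the URL is a placeholder)
inst.updates=http://server.example.com/product.img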
Chapter 2. Installing Red Hat Enterprise Linux Virtual Machines | Chapter 2. Installing Red Hat Enterprise Linux Virtual Machines Installing a Red Hat Enterprise Linux virtual machine involves the following key steps: Create a virtual machine. You must add a virtual disk for storage, and a network interface to connect the virtual machine to the network. Start the virtual machine and install an operating system. See your operating system's documentation for instructions. Red Hat Enterprise Linux 6: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/Installation_Guide/index.html Red Hat Enterprise Linux 7: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/Installation_Guide/index.html Red Hat Enterprise Linux Atomic Host 7: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/installation_and_configuration_guide Red Hat Enterprise Linux 8: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/performing_a_standard_rhel_installation/index Enable the required repositories for your operating system. Install guest agents and drivers for additional virtual machine functionality. 2.1. Creating a Virtual Machine Create a new virtual machine and configure the required settings. Procedure Click Compute Virtual Machines . Click New to open the New Virtual Machine window. Select an Operating System from the drop-down list. Enter a Name for the virtual machine. Add storage to the virtual machine. Attach or Create a virtual disk under Instance Images . Click Attach and select an existing virtual disk. Click Create and enter a Size(GB) and Alias for a new virtual disk. You can accept the default settings for all other fields, or change them if required. See Section A.4, "Explanation of Settings in the New Virtual Disk and Edit Virtual Disk Windows" for more details on the fields for all disk types. Connect the virtual machine to the network. Add a network interface by selecting a vNIC profile from the nic1 drop-down list at the bottom of the General tab. Specify the virtual machine's Memory Size on the System tab. Choose the First Device that the virtual machine will boot from on the Boot Options tab. You can accept the default settings for all other fields, or change them if required. For more details on all fields in the New Virtual Machine window, see Section A.1, "Explanation of Settings in the New Virtual Machine and Edit Virtual Machine Windows" . Click OK . The new virtual machine is created and displays in the list of virtual machines with a status of Down . Before you can use this virtual machine, you must install an operating system and register with the Content Delivery Network. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/installing_red_hat_enterprise_linux_virtual_machines |
Appendix B. Using Red Hat Maven repositories | Appendix B. Using Red Hat Maven repositories This section describes how to use Red Hat-provided Maven repositories in your software. B.1. Using the online repository Red Hat maintains a central Maven repository for use with your Maven-based projects. For more information, see the repository welcome page . There are two ways to configure Maven to use the Red Hat repository: Add the repository to your Maven settings Add the repository to your POM file Adding the repository to your Maven settings This method of configuration applies to all Maven projects owned by your user, as long as your POM file does not override the repository configuration and the included profile is enabled. Procedure Locate the Maven settings.xml file. It is usually inside the .m2 directory in the user home directory. If the file does not exist, use a text editor to create it. On Linux or UNIX: /home/ <username> /.m2/settings.xml On Windows: C:\Users\<username>\.m2\settings.xml Add a new profile containing the Red Hat repository to the profiles element of the settings.xml file, as in the following example: Example: A Maven settings.xml file containing the Red Hat repository <settings> <profiles> <profile> <id>red-hat</id> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>red-hat</activeProfile> </activeProfiles> </settings> For more information about Maven configuration, see the Maven settings reference . Adding the repository to your POM file To configure a repository directly in your project, add a new entry to the repositories element of your POM file, as in the following example: Example: A Maven pom.xml file containing the Red Hat repository <project> <modelVersion>4.0.0</modelVersion> <groupId>com.example</groupId> <artifactId>example-app</artifactId> <version>1.0.0</version> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> </project> For more information about POM file configuration, see the Maven POM reference . B.2. Using a local repository Red Hat provides file-based Maven repositories for some of its components. These are delivered as downloadable archives that you can extract to your local filesystem. To configure Maven to use a locally extracted repository, apply the following XML in your Maven settings or POM file: <repository> <id>red-hat-local</id> <url> USD{repository-url} </url> </repository> USD{repository-url} must be a file URL containing the local filesystem path of the extracted repository. Table B.1. Example URLs for local Maven repositories Operating system Filesystem path URL Linux or UNIX /home/alice/maven-repository file:/home/alice/maven-repository Windows C:\repos\red-hat file:C:\repos\red-hat | [
"/home/ <username> /.m2/settings.xml",
"C:\\Users\\<username>\\.m2\\settings.xml",
"<settings> <profiles> <profile> <id>red-hat</id> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>red-hat</activeProfile> </activeProfiles> </settings>",
"<project> <modelVersion>4.0.0</modelVersion> <groupId>com.example</groupId> <artifactId>example-app</artifactId> <version>1.0.0</version> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> </project>",
"<repository> <id>red-hat-local</id> <url> USD{repository-url} </url> </repository>"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_spring_boot_starter/using_red_hat_maven_repositories |
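To confirm that Maven can actually resolve artifacts from the repository you configured, the dependency plugin can fetch a single artifact on demand. The commands below are a sketch: the artifact coordinates are placeholders that you must replace with a real groupId:artifactId:version, and the local path is only an example.
# Resolve an artifact from the online Red Hat repository (replace GROUP:ARTIFACT:VERSION with real coordinates)
mvn dependency:get -Dartifact=GROUP:ARTIFACT:VERSION -DremoteRepositories=red-hat-ga::default::https://maven.repository.redhat.com/ga
# The same check against a locally extracted repository (the path is an example)
mvn dependency:get -Dartifact=GROUP:ARTIFACT:VERSION -DremoteRepositories=red-hat-local::default::file:/home/alice/maven-repository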
Chapter 28. AWS Kinesis Firehose Component | Chapter 28. AWS Kinesis Firehose Component Available as of Camel version 2.19 The Kinesis Firehose component supports sending messages to the Amazon Kinesis Firehose service. Prerequisites You must have a valid Amazon Web Services developer account, and be signed up to use Amazon Kinesis Firehose. More information is available at AWS Kinesis Firehose 28.1. URI Format aws-kinesis-firehose://delivery-stream-name[?options] The stream needs to be created prior to it being used. You can append query options to the URI in the following format: ?options=value&option2=value&... 28.2. URI Options The AWS Kinesis Firehose component supports 5 options, which are listed below. Name Description Default Type configuration (advanced) The AWS Kinesis Firehose default configuration KinesisFirehose Configuration accessKey (producer) Amazon AWS Access Key String secretKey (producer) Amazon AWS Secret Key String region (producer) Amazon AWS Region String resolvePropertyPlaceholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The AWS Kinesis Firehose endpoint is configured using URI syntax: with the following path and query parameters: 28.2.1. Path Parameters (1 parameter): Name Description Default Type streamName Required Name of the stream String 28.2.2. Query Parameters (7 parameters): Name Description Default Type amazonKinesisFirehoseClient (producer) Amazon Kinesis Firehose client to use for all requests for this endpoint AmazonKinesisFirehose proxyHost (producer) To define a proxy host when instantiating the DDBStreams client String proxyPort (producer) To define a proxy port when instantiating the DDBStreams client Integer region (producer) The region in which Kinesis client needs to work String synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean accessKey (security) Amazon AWS Access Key String secretKey (security) Amazon AWS Secret Key String 28.3. Spring Boot Auto-Configuration The component supports 12 options, which are listed below. Name Description Default Type camel.component.aws-kinesis-firehose.access-key Amazon AWS Access Key String camel.component.aws-kinesis-firehose.configuration.access-key Amazon AWS Access Key String camel.component.aws-kinesis-firehose.configuration.amazon-kinesis-firehose-client Amazon Kinesis Firehose client to use for all requests for this endpoint AmazonKinesisFirehose camel.component.aws-kinesis-firehose.configuration.proxy-host To define a proxy host when instantiating the DDBStreams client String camel.component.aws-kinesis-firehose.configuration.proxy-port To define a proxy port when instantiating the DDBStreams client Integer camel.component.aws-kinesis-firehose.configuration.region The region in which Kinesis client needs to work String camel.component.aws-kinesis-firehose.configuration.secret-key Amazon AWS Secret Key String camel.component.aws-kinesis-firehose.configuration.stream-name Name of the stream String camel.component.aws-kinesis-firehose.enabled Enable aws-kinesis-firehose component true Boolean camel.component.aws-kinesis-firehose.region Amazon AWS Region String camel.component.aws-kinesis-firehose.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting.
Only properties which are of String type can use property placeholders. true Boolean camel.component.aws-kinesis-firehose.secret-key Amazon AWS Secret Key String Required Kinesis Firehose component options You have to provide the amazonKinesisClient in the Registry with proxies and relevant credentials configured. 28.4. Usage 28.4.1. Amazon Kinesis Firehose configuration You will need to create an instance of AmazonKinesisClient and bind it to the registry ClientConfiguration clientConfiguration = new ClientConfiguration(); clientConfiguration.setProxyHost("http://myProxyHost"); clientConfiguration.setProxyPort(8080); Region region = Region.getRegion(Regions.fromName(region)); region.createClient(AmazonKinesisClient.class, null, clientConfiguration); // the 'null' here is the AWSCredentialsProvider which defaults to an instance of DefaultAWSCredentialsProviderChain registry.bind("kinesisFirehoseClient", client); You then have to reference the AmazonKinesisFirehoseClient in the amazonKinesisFirehoseClient URI option. from("aws-kinesis-firehose://mykinesisdeliverystream?amazonKinesisFirehoseClient=#kinesisClient") .to("log:out?showAll=true"); 28.4.2. Providing AWS Credentials It is recommended that the credentials are obtained by using the DefaultAWSCredentialsProviderChain that is the default when creating a new ClientConfiguration instance, however, a different AWSCredentialsProvider can be specified when calling createClient(... ). 28.4.3. Message headers set by the Kinesis producer on successful storage of a Record Header Type Description CamelAwsKinesisFirehoseRecordId String The record ID, as defined in Response Syntax 28.5. Dependencies Maven users will need to add the following dependency to their pom.xml. pom.xml <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-aws</artifactId> <version>USD{camel-version}</version> </dependency> where USD{camel-version } must be replaced by the actual version of Camel (2.19 or higher). 28.6. See Also Configuring Camel Component Endpoint Getting Started AWS Component | [
"aws-kinesis-firehose://delivery-stream-name[?options]",
"aws-kinesis-firehose:streamName",
"ClientConfiguration clientConfiguration = new ClientConfiguration(); clientConfiguration.setProxyHost(\"http://myProxyHost\"); clientConfiguration.setProxyPort(8080); Region region = Region.getRegion(Regions.fromName(region)); region.createClient(AmazonKinesisClient.class, null, clientConfiguration); // the 'null' here is the AWSCredentialsProvider which defaults to an instance of DefaultAWSCredentialsProviderChain registry.bind(\"kinesisFirehoseClient\", client);",
"from(\"aws-kinesis-firehose://mykinesisdeliverystream?amazonKinesisFirehoseClient=#kinesisClient\") .to(\"log:out?showAll=true\");",
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-aws</artifactId> <version>USD{camel-version}</version> </dependency>"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/aws-kinesis-firehose-component |
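When the component is used through its Spring Boot starter, the auto-configuration options listed in the table above can be set in application.properties instead of binding a pre-built client in the registry. The snippet below is a sketch using placeholder credentials and region; the stream name reuses the mykinesisdeliverystream example from the usage section, and you should substitute your own values.
# Append placeholder settings to the Spring Boot configuration (values are examples only)
cat >> src/main/resources/application.properties <<'EOF'
camel.component.aws-kinesis-firehose.access-key=MY_ACCESS_KEY
camel.component.aws-kinesis-firehose.secret-key=MY_SECRET_KEY
camel.component.aws-kinesis-firehose.region=eu-west-1
camel.component.aws-kinesis-firehose.configuration.stream-name=mykinesisdeliverystream
EOF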
Logging | Logging OpenShift Container Platform 4.7 OpenShift Logging installation, usage, and release notes Red Hat OpenShift Documentation Team | [
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" 1 namespace: \"openshift-logging\" spec: managementState: \"Managed\" 2 logStore: type: \"elasticsearch\" 3 retentionPolicy: 4 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 5 storage: storageClassName: \"<storage_class_name>\" 6 size: 200G resources: 7 limits: memory: \"16Gi\" requests: memory: \"16Gi\" proxy: 8 resources: limits: memory: 256Mi requests: memory: 256Mi redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" 9 kibana: replicas: 1 collection: logs: type: \"fluentd\" 10 fluentd: {}",
"oc get deployment",
"cluster-logging-operator 1/1 1 1 18h elasticsearch-cd-x6kdekli-1 0/1 1 0 6m54s elasticsearch-cdm-x6kdekli-1 1/1 1 1 18h elasticsearch-cdm-x6kdekli-2 0/1 1 0 6m49s elasticsearch-cdm-x6kdekli-3 0/1 1 0 6m44s",
"apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc create -f <file-name>.yaml",
"oc create -f eo-namespace.yaml",
"apiVersion: v1 kind: Namespace metadata: name: openshift-logging annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\"",
"oc create -f <file-name>.yaml",
"oc create -f olo-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-operators-redhat namespace: openshift-operators-redhat 1 spec: {}",
"oc create -f <file-name>.yaml",
"oc create -f eo-og.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: \"elasticsearch-operator\" namespace: \"openshift-operators-redhat\" 1 spec: channel: \"stable-5.1\" 2 installPlanApproval: \"Automatic\" source: \"redhat-operators\" 3 sourceNamespace: \"openshift-marketplace\" name: \"elasticsearch-operator\"",
"oc create -f <file-name>.yaml",
"oc create -f eo-sub.yaml",
"oc get csv --all-namespaces",
"NAMESPACE NAME DISPLAY VERSION REPLACES PHASE default elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded kube-node-lease elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded kube-public elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded kube-system elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded openshift-apiserver-operator elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded openshift-apiserver elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded openshift-authentication-operator elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded openshift-authentication elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: targetNamespaces: - openshift-logging 2",
"oc create -f <file-name>.yaml",
"oc create -f olo-og.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: \"stable\" 2 name: cluster-logging source: redhat-operators 3 sourceNamespace: openshift-marketplace",
"oc create -f <file-name>.yaml",
"oc create -f olo-sub.yaml",
"oc get csv -n openshift-logging",
"NAMESPACE NAME DISPLAY VERSION REPLACES PHASE openshift-logging clusterlogging.5.1.0-202007012112.p0 OpenShift Logging 5.1.0-202007012112.p0 Succeeded",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" 1 namespace: \"openshift-logging\" spec: managementState: \"Managed\" 2 logStore: type: \"elasticsearch\" 3 retentionPolicy: 4 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 5 storage: storageClassName: \"<storage-class-name>\" 6 size: 200G resources: 7 limits: memory: \"16Gi\" requests: memory: \"16Gi\" proxy: 8 resources: limits: memory: 256Mi requests: memory: 256Mi redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" 9 kibana: replicas: 1 collection: logs: type: \"fluentd\" 10 fluentd: {}",
"oc get deployment",
"cluster-logging-operator 1/1 1 1 18h elasticsearch-cd-x6kdekli-1 1/1 1 0 6m54s elasticsearch-cdm-x6kdekli-1 1/1 1 1 18h elasticsearch-cdm-x6kdekli-2 1/1 1 0 6m49s elasticsearch-cdm-x6kdekli-3 1/1 1 0 6m44s",
"oc create -f <file-name>.yaml",
"oc create -f olo-instance.yaml",
"oc get pods -n openshift-logging",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s fluentd-587vb 1/1 Running 0 2m26s fluentd-7mpb9 1/1 Running 0 2m30s fluentd-flm6j 1/1 Running 0 2m33s fluentd-gn4rn 1/1 Running 0 2m26s fluentd-nlgb6 1/1 Running 0 2m30s fluentd-snpkt 1/1 Running 0 2m28s kibana-d6d5668c5-rppqm 2/2 Running 0 2m39s",
"oc auth can-i get pods/log -n <project>",
"yes",
"oc adm pod-network join-projects --to=openshift-operators-redhat openshift-logging",
"oc label namespace openshift-operators-redhat project=openshift-operators-redhat",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring-ingress-operators-redhat spec: ingress: - from: - podSelector: {} - from: - namespaceSelector: matchLabels: project: \"openshift-operators-redhat\" - from: - namespaceSelector: matchLabels: name: \"openshift-monitoring\" - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" 1 namespace: \"openshift-logging\" 2 spec: managementState: \"Managed\" 3 logStore: type: \"elasticsearch\" 4 retentionPolicy: application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 resources: limits: memory: 16Gi requests: cpu: 500m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: 5 type: \"kibana\" kibana: resources: limits: memory: 736Mi requests: cpu: 100m memory: 736Mi replicas: 1 collection: 6 logs: type: \"fluentd\" fluentd: resources: limits: memory: 736Mi requests: cpu: 100m memory: 736Mi",
"oc get pods --selector component=fluentd -o wide -n openshift-logging",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES fluentd-8d69v 1/1 Running 0 134m 10.130.2.30 master1.example.com <none> <none> fluentd-bd225 1/1 Running 0 134m 10.131.1.11 master2.example.com <none> <none> fluentd-cvrzs 1/1 Running 0 134m 10.130.0.21 master3.example.com <none> <none> fluentd-gpqg2 1/1 Running 0 134m 10.128.2.27 worker1.example.com <none> <none> fluentd-l9j7j 1/1 Running 0 134m 10.129.2.31 worker2.example.com <none> <none>",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: collection: logs: fluentd: resources: limits: 1 memory: 736Mi requests: cpu: 100m memory: 736Mi",
"oc edit ClusterLogging instance",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: forwarder: fluentd: buffer: chunkLimitSize: 8m 1 flushInterval: 5s 2 flushMode: interval 3 flushThreadCount: 3 4 overflowAction: throw_exception 5 retryMaxInterval: \"300s\" 6 retryType: periodic 7 retryWait: 1s 8 totalLimitSize: 32m 9",
"oc get pods -n openshift-logging",
"oc extract configmap/fluentd --confirm",
"<buffer> @type file path '/var/lib/fluentd/default' flush_mode interval flush_interval 5s flush_thread_count 3 retry_type periodic retry_wait 1s retry_max_interval 300s retry_timeout 60m queued_chunks_limit_size \"#{ENV['BUFFER_QUEUE_LIMIT'] || '32'}\" total_limit_size 32m chunk_limit_size 8m overflow_action throw_exception </buffer>",
"outputRefs: - default",
"oc edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" collection: logs: type: \"fluentd\" fluentd: {}",
"oc get pods -n openshift-logging",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: 1 - name: all-to-default inputRefs: - infrastructure - application - audit outputRefs: - default",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch-insecure type: \"elasticsearch\" url: http://elasticsearch-insecure.messaging.svc.cluster.local insecure: true - name: elasticsearch-secure type: \"elasticsearch\" url: https://elasticsearch-secure.messaging.svc.cluster.local secret: name: es-audit - name: secureforward-offcluster type: \"fluentdForward\" url: https://secureforward.offcluster.com:24224 secret: name: secureforward pipelines: - name: container-logs inputRefs: - application outputRefs: - secureforward-offcluster - name: infra-logs inputRefs: - infrastructure outputRefs: - elasticsearch-insecure - name: audit-logs inputRefs: - audit outputRefs: - elasticsearch-secure - default 1",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" retentionPolicy: 1 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3",
"apiVersion: \"logging.openshift.io/v1\" kind: \"Elasticsearch\" metadata: name: \"elasticsearch\" spec: indexManagement: policies: 1 - name: infra-policy phases: delete: minAge: 7d 2 hot: actions: rollover: maxAge: 8h 3 pollInterval: 15m 4",
"oc get cronjob",
"NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 4s elasticsearch-im-audit */15 * * * * False 0 <none> 4s elasticsearch-im-infra */15 * * * * False 0 <none> 4s",
"oc edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: logStore: type: \"elasticsearch\" elasticsearch: 1 resources: limits: 2 memory: \"32Gi\" requests: 3 cpu: \"1\" memory: \"16Gi\" proxy: 4 resources: limits: memory: 100Mi requests: memory: 100Mi",
"resources: limits: 1 memory: \"32Gi\" requests: 2 cpu: \"8\" memory: \"32Gi\"",
"oc edit clusterlogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: logStore: type: \"elasticsearch\" elasticsearch: redundancyPolicy: \"SingleRedundancy\" 1",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"gp2\" size: \"200G\"",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}",
"oc project openshift-logging",
"oc get pods | grep elasticsearch-",
"oc -n openshift-logging patch daemonset/logging-fluentd -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-fluentd\": \"false\"}}}}}'",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST",
"oc exec -c elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST",
"{\"_shards\":{\"total\":4,\"successful\":4,\"failed\":0},\".security\":{\"total\":2,\"successful\":2,\"failed\":0},\".kibana_1\":{\"total\":2,\"successful\":2,\"failed\":0}}",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'",
"{\"acknowledged\":true,\"persistent\":{\"cluster\":{\"routing\":{\"allocation\":{\"enable\":\"primaries\"}}}},\"transient\":",
"oc rollout resume deployment/<deployment-name>",
"oc rollout resume deployment/elasticsearch-cdm-0-1",
"deployment.extensions/elasticsearch-cdm-0-1 resumed",
"oc get pods | grep elasticsearch-",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-2-f799564cb-l9mj7 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr 2/2 Running 0 22h",
"oc rollout pause deployment/<deployment-name>",
"oc rollout pause deployment/elasticsearch-cdm-0-1",
"deployment.extensions/elasticsearch-cdm-0-1 paused",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=_cluster/health?pretty=true",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=_cluster/health?pretty=true",
"{ \"cluster_name\" : \"elasticsearch\", \"status\" : \"yellow\", 1 \"timed_out\" : false, \"number_of_nodes\" : 3, \"number_of_data_nodes\" : 3, \"active_primary_shards\" : 8, \"active_shards\" : 16, \"relocating_shards\" : 0, \"initializing_shards\" : 0, \"unassigned_shards\" : 1, \"delayed_unassigned_shards\" : 0, \"number_of_pending_tasks\" : 0, \"number_of_in_flight_fetch\" : 0, \"task_max_waiting_in_queue_millis\" : 0, \"active_shards_percent_as_number\" : 100.0 }",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'",
"{ \"acknowledged\" : true, \"persistent\" : { }, \"transient\" : { \"cluster\" : { \"routing\" : { \"allocation\" : { \"enable\" : \"all\" } } } } }",
"oc -n openshift-logging patch daemonset/logging-fluentd -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-fluentd\": \"true\"}}}}}'",
"oc get service elasticsearch -o jsonpath={.spec.clusterIP} -n openshift-logging",
"172.30.183.229",
"oc get service elasticsearch -n openshift-logging",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE elasticsearch ClusterIP 172.30.183.229 <none> 9200/TCP 22h",
"oc exec elasticsearch-cdm-oplnhinv-1-5746475887-fj2f8 -n openshift-logging -- curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://172.30.183.229:9200/_cat/health\"",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 29 100 29 0 0 108 0 --:--:-- --:--:-- --:--:-- 108",
"oc project openshift-logging",
"oc extract secret/elasticsearch --to=. --keys=admin-ca",
"admin-ca",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: elasticsearch namespace: openshift-logging spec: host: to: kind: Service name: elasticsearch tls: termination: reencrypt destinationCACertificate: | 1",
"cat ./admin-ca | sed -e \"s/^/ /\" >> <file-name>.yaml",
"oc create -f <file-name>.yaml",
"route.route.openshift.io/elasticsearch created",
"token=USD(oc whoami -t)",
"routeES=`oc get route elasticsearch -o jsonpath={.spec.host}`",
"curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://USD{routeES}\"",
"{ \"name\" : \"elasticsearch-cdm-i40ktba0-1\", \"cluster_name\" : \"elasticsearch\", \"cluster_uuid\" : \"0eY-tJzcR3KOdpgeMJo-MQ\", \"version\" : { \"number\" : \"6.8.1\", \"build_flavor\" : \"oss\", \"build_type\" : \"zip\", \"build_hash\" : \"Unknown\", \"build_date\" : \"Unknown\", \"build_snapshot\" : true, \"lucene_version\" : \"7.7.0\", \"minimum_wire_compatibility_version\" : \"5.6.0\", \"minimum_index_compatibility_version\" : \"5.0.0\" }, \"<tagline>\" : \"<for search>\" }",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: logs: type: \"fluentd\" fluentd: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi",
"oc edit ClusterLogging instance",
"oc edit ClusterLogging instance apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: visualization: type: \"kibana\" kibana: replicas: 1 1",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: logs: type: \"fluentd\" fluentd: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 tolerations: 1 - key: \"logging\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 6000 resources: limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: {} redundancyPolicy: \"ZeroRedundancy\" visualization: type: \"kibana\" kibana: tolerations: 2 - key: \"logging\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 6000 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi replicas: 1 collection: logs: type: \"fluentd\" fluentd: tolerations: 3 - key: \"logging\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 6000 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi",
"tolerations: - effect: \"NoExecute\" key: \"node.kubernetes.io/disk-pressure\" operator: \"Exists\"",
"oc adm taint nodes <node-name> <key>=<value>:<effect>",
"oc adm taint nodes node1 elasticsearch=node:NoExecute",
"logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 1 tolerations: - key: \"elasticsearch\" 1 operator: \"Exists\" 2 effect: \"NoExecute\" 3 tolerationSeconds: 6000 4",
"oc adm taint nodes <node-name> <key>=<value>:<effect>",
"oc adm taint nodes node1 kibana=node:NoExecute",
"visualization: type: \"kibana\" kibana: tolerations: - key: \"kibana\" 1 operator: \"Exists\" 2 effect: \"NoExecute\" 3 tolerationSeconds: 6000 4",
"tolerations: - key: \"node-role.kubernetes.io/master\" operator: \"Exists\" effect: \"NoExecute\"",
"oc adm taint nodes <node-name> <key>=<value>:<effect>",
"oc adm taint nodes node1 collector=node:NoExecute",
"collection: logs: type: \"fluentd\" fluentd: tolerations: - key: \"collector\" 1 operator: \"Exists\" 2 effect: \"NoExecute\" 3 tolerationSeconds: 6000 4",
"oc edit ClusterLogging instance",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: collection: logs: fluentd: resources: null type: fluentd logStore: elasticsearch: nodeCount: 3 nodeSelector: 1 node-role.kubernetes.io/infra: '' redundancyPolicy: SingleRedundancy resources: limits: cpu: 500m memory: 16Gi requests: cpu: 500m memory: 16Gi storage: {} type: elasticsearch managementState: Managed visualization: kibana: nodeSelector: 2 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana",
"oc get pod kibana-5b8bdf44f9-ccpq9 -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none>",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.20.0 ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.20.0 ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.20.0 ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.20.0 ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.20.0 ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.20.0 ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.20.0",
"oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml",
"kind: Node apiVersion: v1 metadata: name: ip-10-0-139-48.us-east-2.compute.internal selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751 resourceVersion: '39083' creationTimestamp: '2020-04-13T19:07:55Z' labels: node-role.kubernetes.io/infra: ''",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: visualization: kibana: nodeSelector: 1 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana",
"oc get pods",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m fluentd-42dzz 1/1 Running 0 28m fluentd-d74rq 1/1 Running 0 28m fluentd-m5vr9 1/1 Running 0 28m fluentd-nkxl7 1/1 Running 0 28m fluentd-pdvqb 1/1 Running 0 28m fluentd-tflh6 1/1 Running 0 28m kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s",
"oc get pod kibana-7d85dcffc8-bfpfp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none>",
"oc get pods",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m fluentd-42dzz 1/1 Running 0 29m fluentd-d74rq 1/1 Running 0 29m fluentd-m5vr9 1/1 Running 0 29m fluentd-nkxl7 1/1 Running 0 29m fluentd-pdvqb 1/1 Running 0 29m fluentd-tflh6 1/1 Running 0 29m kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s",
"Compress=yes 1 ForwardToConsole=no 2 ForwardToSyslog=no MaxRetentionSec=1month 3 RateLimitBurst=10000 4 RateLimitIntervalSec=30s Storage=persistent 5 SyncIntervalSec=1s 6 SystemMaxUse=8G 7 SystemKeepFree=20% 8 SystemMaxFileSize=10M 9",
"export jrnl_cnf=USD( cat journald.conf | base64 -w0 )",
"cat << EOF > ./40-worker-custom-journald.yaml 1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 2 name: 40-worker-custom-journald 3 spec: config: ignition: config: {} security: tls: {} timeouts: {} version: 3.2.0 networkd: {} passwd: {} storage: files: - contents: source: data:text/plain;charset=utf-8;base64,USD{jrnl_cnf} 4 verification: {} filesystem: root mode: 0644 5 path: /etc/systemd/journald.conf.d/custom.conf osImageURL: \"\" EOF",
"oc apply -f <file_name>.yaml",
"oc describe machineconfigpool/<node> 1",
"Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Conditions: Message: Reason: All nodes are updating to rendered-worker-913514517bcea7c93bd446f4830bc64e",
"Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.",
"oc logs -f <pod_name> -c <container_name>",
"oc logs ruby-58cd97df55-mww7r",
"oc logs -f ruby-57f7f4855b-znl92 -c ruby",
"oc logs <object_type>/<resource_name> 1",
"oc logs deployment/ruby",
"oc auth can-i get pods/log -n <project>",
"yes",
"oc auth can-i get pods/log -n <project>",
"yes",
"{ \"_index\": \"infra-000001\", \"_type\": \"_doc\", \"_id\": \"YmJmYTBlNDkZTRmLTliMGQtMjE3NmFiOGUyOWM3\", \"_version\": 1, \"_score\": null, \"_source\": { \"docker\": { \"container_id\": \"f85fa55bbef7bb783f041066be1e7c267a6b88c4603dfce213e32c1\" }, \"kubernetes\": { \"container_name\": \"registry-server\", \"namespace_name\": \"openshift-marketplace\", \"pod_name\": \"redhat-marketplace-n64gc\", \"container_image\": \"registry.redhat.io/redhat/redhat-marketplace-index:v4.7\", \"container_image_id\": \"registry.redhat.io/redhat/redhat-marketplace-index@sha256:65fc0c45aabb95809e376feb065771ecda9e5e59cc8b3024c4545c168f\", \"pod_id\": \"8f594ea2-c866-4b5c-a1c8-a50756704b2a\", \"host\": \"ip-10-0-182-28.us-east-2.compute.internal\", \"master_url\": \"https://kubernetes.default.svc\", \"namespace_id\": \"3abab127-7669-4eb3-b9ef-44c04ad68d38\", \"namespace_labels\": { \"openshift_io/cluster-monitoring\": \"true\" }, \"flat_labels\": [ \"catalogsource_operators_coreos_com/update=redhat-marketplace\" ] }, \"message\": \"time=\\\"2020-09-23T20:47:03Z\\\" level=info msg=\\\"serving registry\\\" database=/database/index.db port=50051\", \"level\": \"unknown\", \"hostname\": \"ip-10-0-182-28.internal\", \"pipeline_metadata\": { \"collector\": { \"ipaddr4\": \"10.0.182.28\", \"inputname\": \"fluent-plugin-systemd\", \"name\": \"fluentd\", \"received_at\": \"2020-09-23T20:47:15.007583+00:00\", \"version\": \"1.7.4 1.6.0\" } }, \"@timestamp\": \"2020-09-23T20:47:03.422465+00:00\", \"viaq_msg_id\": \"YmJmYTBlNDktMDMGQtMjE3NmFiOGUyOWM3\", \"openshift\": { \"labels\": { \"logging\": \"infra\" } } }, \"fields\": { \"@timestamp\": [ \"2020-09-23T20:47:03.422Z\" ], \"pipeline_metadata.collector.received_at\": [ \"2020-09-23T20:47:15.007Z\" ] }, \"sort\": [ 1600894023422 ] }",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: elasticsearch-secure 3 type: \"elasticsearch\" url: https://elasticsearch.secure.com:9200 secret: name: elasticsearch - name: elasticsearch-insecure 4 type: \"elasticsearch\" url: http://elasticsearch.insecure.com:9200 - name: kafka-app 5 type: \"kafka\" url: tls://kafka.secure.com:9093/app-topic inputs: 6 - name: my-app-logs application: namespaces: - my-project pipelines: - name: audit-logs 7 inputRefs: - audit outputRefs: - elasticsearch-secure - default parse: json 8 labels: secure: \"true\" 9 datacenter: \"east\" - name: infrastructure-logs 10 inputRefs: - infrastructure outputRefs: - elasticsearch-insecure labels: datacenter: \"west\" - name: my-app 11 inputRefs: - my-app-logs outputRefs: - default - inputRefs: 12 - application outputRefs: - kafka-app labels: datacenter: \"south\"",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: elasticsearch-insecure 3 type: \"elasticsearch\" 4 url: http://elasticsearch.insecure.com:9200 5 - name: elasticsearch-secure type: \"elasticsearch\" url: https://elasticsearch.secure.com:9200 secret: name: es-secret 6 pipelines: - name: application-logs 7 inputRefs: 8 - application - audit outputRefs: - elasticsearch-secure 9 - default 10 parse: json 11 labels: myLabel: \"myValue\" 12 - name: infrastructure-audit-logs 13 inputRefs: - infrastructure outputRefs: - elasticsearch-insecure labels: logs: \"audit-infra\"",
"oc create -f <file-name>.yaml",
"oc delete pod --selector logging-infra=fluentd",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' pipelines: - name: forward-to-fluentd-secure 7 inputRefs: 8 - application - audit outputRefs: - fluentd-server-secure 9 - default 10 parse: json 11 labels: clusterId: \"C1234\" 12 - name: forward-to-fluentd-insecure 13 inputRefs: - infrastructure outputRefs: - fluentd-server-insecure labels: clusterId: \"C1234\"",
"oc create -f <file-name>.yaml",
"oc delete pod --selector logging-infra=fluentd",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: rsyslog-east 3 type: syslog 4 syslog: 5 facility: local0 rfc: RFC3164 payloadKey: message severity: informational url: 'tls://rsyslogserver.east.example.com:514' 6 secret: 7 name: syslog-secret - name: rsyslog-west type: syslog syslog: appName: myapp facility: user msgID: mymsg procID: myproc rfc: RFC5424 severity: debug url: 'udp://rsyslogserver.west.example.com:514' pipelines: - name: syslog-east 8 inputRefs: 9 - audit - application outputRefs: 10 - rsyslog-east - default 11 parse: json 12 labels: secure: \"true\" 13 syslog: \"east\" - name: syslog-west 14 inputRefs: - infrastructure outputRefs: - rsyslog-west - default labels: syslog: \"west\"",
"oc create -f <file-name>.yaml",
"oc delete pod --selector logging-infra=fluentd",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: app-logs 3 type: kafka 4 url: tls://kafka.example.devlab.com:9093/app-topic 5 secret: name: kafka-secret 6 - name: infra-logs type: kafka url: tcp://kafka.devlab2.example.com:9093/infra-topic 7 - name: audit-logs type: kafka url: tls://kafka.qelab.example.com:9093/audit-topic secret: name: kafka-secret-qe pipelines: - name: app-topic 8 inputRefs: 9 - application outputRefs: 10 - app-logs parse: json 11 labels: logType: \"application\" 12 - name: infra-topic 13 inputRefs: - infrastructure outputRefs: - infra-logs labels: logType: \"infra\" - name: audit-topic inputRefs: - audit outputRefs: - audit-logs - default 14 labels: logType: \"audit\"",
"spec: outputs: - name: app-logs type: kafka secret: name: kafka-secret-dev kafka: 1 brokers: 2 - tls://kafka-broker1.example.com:9093/ - tls://kafka-broker2.example.com:9093/ topic: app-topic 3",
"oc create -f <file-name>.yaml",
"oc delete pod --selector logging-infra=fluentd",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' inputs: 7 - name: my-app-logs application: namespaces: - my-project pipelines: - name: forward-to-fluentd-insecure 8 inputRefs: 9 - my-app-logs outputRefs: 10 - fluentd-server-insecure parse: json 11 labels: project: \"my-project\" 12 - name: forward-to-fluentd-secure 13 inputRefs: - application - audit - infrastructure outputRefs: - fluentd-server-secure - default labels: clusterId: \"C1234\"",
"oc create -f <file-name>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: pipelines: - inputRefs: [ myAppLogData ] 3 outputRefs: [ default ] 4 parse: json 5 inputs: 6 - name: myAppLogData application: selector: matchLabels: 7 environment: production app: nginx namespaces: 8 - app1 - app2 outputs: 9 - default",
"- inputRefs: [ myAppLogData, myOtherAppLogData ]",
"oc create -f <file-name>.yaml",
"<store> @type forward <security> self_hostname USD{hostname} shared_key \"fluent-receiver\" </security> transport tls tls_verify_hostname false tls_cert_path '/etc/ocp-forward/ca-bundle.crt' <buffer> @type file path '/var/lib/fluentd/secureforwardlegacy' queued_chunks_limit_size \"1024\" chunk_limit_size \"1m\" flush_interval \"5s\" flush_at_shutdown \"false\" flush_thread_count \"2\" retry_max_interval \"300\" retry_forever true overflow_action \"#{ENV['BUFFER_QUEUE_FULL_ACTION'] || 'throw_exception'}\" </buffer> <server> host fluent-receiver.example.com port 24224 </server> </store>",
"<store> @type forward <security> self_hostname USD{hostname} shared_key <key> 1 </security> transport tls 2 tls_verify_hostname <value> 3 tls_cert_path <path_to_file> 4 <buffer> 5 @type file path '/var/lib/fluentd/secureforwardlegacy' queued_chunks_limit_size \"#{ENV['BUFFER_QUEUE_LIMIT'] || '1024' }\" chunk_limit_size \"#{ENV['BUFFER_SIZE_LIMIT'] || '1m' }\" flush_interval \"#{ENV['FORWARD_FLUSH_INTERVAL'] || '5s'}\" flush_at_shutdown \"#{ENV['FLUSH_AT_SHUTDOWN'] || 'false'}\" flush_thread_count \"#{ENV['FLUSH_THREAD_COUNT'] || 2}\" retry_max_interval \"#{ENV['FORWARD_RETRY_WAIT'] || '300'}\" retry_forever true </buffer> <server> name 6 host 7 hostlabel 8 port 9 </server> <server> 10 name host </server>",
"oc create configmap secure-forward --from-file=secure-forward.conf -n openshift-logging",
"oc delete pod --selector logging-infra=fluentd",
"<store> @type syslog_buffered remote_syslog rsyslogserver.example.com port 514 hostname USD{hostname} remove_tag_prefix tag facility local0 severity info use_record true payload_key message rfc 3164 </store>",
"<store> @type <type> 1 remote_syslog <syslog-server> 2 port 514 3 hostname USD{hostname} remove_tag_prefix <prefix> 4 facility <value> severity <value> use_record <value> payload_key message rfc 3164 5 </store>",
"oc create configmap syslog --from-file=syslog.conf -n openshift-logging",
"oc delete pod --selector logging-infra=fluentd",
"{\"level\":\"info\",\"name\":\"fred\",\"home\":\"bedrock\"}",
"{\"message\":\"{\\\"level\\\":\\\"info\\\",\\\"name\\\":\\\"fred\\\",\\\"home\\\":\\\"bedrock\\\"\", \"more fields...\"}",
"pipelines: - inputRefs: [ application ] outputRefs: myFluentd parse: json",
"{\"structured\": { \"level\": \"info\", \"name\": \"fred\", \"home\": \"bedrock\" }, \"more fields...\"}",
"outputDefaults: - elasticsearch: structuredTypeKey: kubernetes.labels.logFormat 1 structuredTypeName: nologformat pipelines: - inputRefs: <application> outputRefs: default parse: json 2",
"{ \"structured\":{\"name\":\"fred\",\"home\":\"bedrock\"}, \"kubernetes\":{\"labels\":{\"logFormat\": \"apache\", ...}} }",
"{ \"structured\":{\"name\":\"wilma\",\"home\":\"bedrock\"}, \"kubernetes\":{\"labels\":{\"logFormat\": \"google\", ...}} }",
"outputDefaults: - elasticsearch: structuredTypeKey: openshift.labels.myLabel 1 structuredTypeName: nologformat pipelines: - name: application-logs inputRefs: - application - audit outputRefs: - elasticsearch-secure - default parse: json labels: myLabel: myValue 2",
"{ \"structured\":{\"name\":\"fred\",\"home\":\"bedrock\"}, \"openshift\":{\"labels\":{\"myLabel\": \"myValue\", ...}} }",
"outputDefaults: - elasticsearch: structuredTypeKey: <log record field> structuredTypeName: <name> pipelines: - inputRefs: - application outputRefs: default parse: json",
"oc create -f <file-name>.yaml",
"oc delete pod --selector logging-infra=fluentd",
"kind: Template apiVersion: template.openshift.io/v1 metadata: name: eventrouter-template annotations: description: \"A pod forwarding kubernetes events to OpenShift Logging stack.\" tags: \"events,EFK,logging,cluster-logging\" objects: - kind: ServiceAccount 1 apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} - kind: ClusterRole 2 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader rules: - apiGroups: [\"\"] resources: [\"events\"] verbs: [\"get\", \"watch\", \"list\"] - kind: ClusterRoleBinding 3 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader-binding subjects: - kind: ServiceAccount name: eventrouter namespace: USD{NAMESPACE} roleRef: kind: ClusterRole name: event-reader - kind: ConfigMap 4 apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} data: config.json: |- { \"sink\": \"stdout\" } - kind: Deployment 5 apiVersion: apps/v1 metadata: name: eventrouter namespace: USD{NAMESPACE} labels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" spec: selector: matchLabels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" replicas: 1 template: metadata: labels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" name: eventrouter spec: serviceAccount: eventrouter containers: - name: kube-eventrouter image: USD{IMAGE} imagePullPolicy: IfNotPresent resources: requests: cpu: USD{CPU} memory: USD{MEMORY} volumeMounts: - name: config-volume mountPath: /etc/eventrouter volumes: - name: config-volume configMap: name: eventrouter parameters: - name: IMAGE 6 displayName: Image value: \"registry.redhat.io/openshift-logging/eventrouter-rhel8:v0.3\" - name: CPU 7 displayName: CPU value: \"100m\" - name: MEMORY 8 displayName: Memory value: \"128Mi\" - name: NAMESPACE displayName: Namespace value: \"openshift-logging\" 9",
"oc process -f <templatefile> | oc apply -n openshift-logging -f -",
"oc process -f eventrouter.yaml | oc apply -n openshift-logging -f -",
"serviceaccount/eventrouter created clusterrole.authorization.openshift.io/event-reader created clusterrolebinding.authorization.openshift.io/event-reader-binding created configmap/eventrouter created deployment.apps/eventrouter created",
"oc get pods --selector component=eventrouter -o name -n openshift-logging",
"pod/cluster-logging-eventrouter-d649f97c8-qvv8r",
"oc logs <cluster_logging_eventrouter_pod> -n openshift-logging",
"oc logs cluster-logging-eventrouter-d649f97c8-qvv8r -n openshift-logging",
"{\"verb\":\"ADDED\",\"event\":{\"metadata\":{\"name\":\"openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f\",\"namespace\":\"openshift-service-catalog-removed\",\"selfLink\":\"/api/v1/namespaces/openshift-service-catalog-removed/events/openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f\",\"uid\":\"787d7b26-3d2f-4017-b0b0-420db4ae62c0\",\"resourceVersion\":\"21399\",\"creationTimestamp\":\"2020-09-08T15:40:26Z\"},\"involvedObject\":{\"kind\":\"Job\",\"namespace\":\"openshift-service-catalog-removed\",\"name\":\"openshift-service-catalog-controller-manager-remover\",\"uid\":\"fac9f479-4ad5-4a57-8adc-cb25d3d9cf8f\",\"apiVersion\":\"batch/v1\",\"resourceVersion\":\"21280\"},\"reason\":\"Completed\",\"message\":\"Job completed\",\"source\":{\"component\":\"job-controller\"},\"firstTimestamp\":\"2020-09-08T15:40:26Z\",\"lastTimestamp\":\"2020-09-08T15:40:26Z\",\"count\":1,\"type\":\"Normal\"}}",
"oc get pod -n openshift-logging --selector component=elasticsearch",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk 2/2 Running 0 31m elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk 2/2 Running 0 30m elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc 2/2 Running 0 29m",
"oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- health",
"{ \"cluster_name\" : \"elasticsearch\", \"status\" : \"green\", }",
"oc project openshift-logging",
"oc get cronjob",
"NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 56s elasticsearch-im-audit */15 * * * * False 0 <none> 56s elasticsearch-im-infra */15 * * * * False 0 <none> 56s",
"oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices",
"Tue Jun 30 14:30:54 UTC 2020 health status index uuid pri rep docs.count docs.deleted store.size pri.store.size green open infra-000008 bnBvUFEXTWi92z3zWAzieQ 3 1 222195 0 289 144 green open infra-000004 rtDSzoqsSl6saisSK7Au1Q 3 1 226717 0 297 148 green open infra-000012 RSf_kUwDSR2xEuKRZMPqZQ 3 1 227623 0 295 147 green open .kibana_7 1SJdCqlZTPWlIAaOUd78yg 1 1 4 0 0 0 green open infra-000010 iXwL3bnqTuGEABbUDa6OVw 3 1 248368 0 317 158 green open infra-000009 YN9EsULWSNaxWeeNvOs0RA 3 1 258799 0 337 168 green open infra-000014 YP0U6R7FQ_GVQVQZ6Yh9Ig 3 1 223788 0 292 146 green open infra-000015 JRBbAbEmSMqK5X40df9HbQ 3 1 224371 0 291 145 green open .orphaned.2020.06.30 n_xQC2dWQzConkvQqei3YA 3 1 9 0 0 0 green open infra-000007 llkkAVSzSOmosWTSAJM_hg 3 1 228584 0 296 148 green open infra-000005 d9BoGQdiQASsS3BBFm2iRA 3 1 227987 0 297 148 green open infra-000003 1-goREK1QUKlQPAIVkWVaQ 3 1 226719 0 295 147 green open .security zeT65uOuRTKZMjg_bbUc1g 1 1 5 0 0 0 green open .kibana-377444158_kubeadmin wvMhDwJkR-mRZQO84K0gUQ 3 1 1 0 0 0 green open infra-000006 5H-KBSXGQKiO7hdapDE23g 3 1 226676 0 295 147 green open infra-000001 eH53BQ-bSxSWR5xYZB6lVg 3 1 341800 0 443 220 green open .kibana-6 RVp7TemSSemGJcsSUmuf3A 1 1 4 0 0 0 green open infra-000011 J7XWBauWSTe0jnzX02fU6A 3 1 226100 0 293 146 green open app-000001 axSAFfONQDmKwatkjPXdtw 3 1 103186 0 126 57 green open infra-000016 m9c1iRLtStWSF1GopaRyCg 3 1 13685 0 19 9 green open infra-000002 Hz6WvINtTvKcQzw-ewmbYg 3 1 228994 0 296 148 green open infra-000013 KR9mMFUpQl-jraYtanyIGw 3 1 228166 0 298 148 green open audit-000001 eERqLdLmQOiQDFES1LBATQ 3 1 0 0 0 0",
"oc get ds fluentd -o json | grep fluentd-init",
"\"containerName\": \"fluentd-init\"",
"oc get kibana kibana -o json",
"[ { \"clusterCondition\": { \"kibana-5fdd766ffd-nb2jj\": [ { \"lastTransitionTime\": \"2020-06-30T14:11:07Z\", \"reason\": \"ContainerCreating\", \"status\": \"True\", \"type\": \"\" }, { \"lastTransitionTime\": \"2020-06-30T14:11:07Z\", \"reason\": \"ContainerCreating\", \"status\": \"True\", \"type\": \"\" } ] }, \"deployment\": \"kibana\", \"pods\": { \"failed\": [], \"notReady\": [] \"ready\": [] }, \"replicaSets\": [ \"kibana-5fdd766ffd\" ], \"replicas\": 1 } ]",
"oc get pod -n openshift-logging --selector component=elasticsearch",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk 2/2 Running 0 31m elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk 2/2 Running 0 30m elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc 2/2 Running 0 29m",
"oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- health",
"{ \"cluster_name\" : \"elasticsearch\", \"status\" : \"green\", }",
"oc project openshift-logging",
"oc get cronjob",
"NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 56s elasticsearch-im-audit */15 * * * * False 0 <none> 56s elasticsearch-im-infra */15 * * * * False 0 <none> 56s",
"oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices",
"Tue Jun 30 14:30:54 UTC 2020 health status index uuid pri rep docs.count docs.deleted store.size pri.store.size green open infra-000008 bnBvUFEXTWi92z3zWAzieQ 3 1 222195 0 289 144 green open infra-000004 rtDSzoqsSl6saisSK7Au1Q 3 1 226717 0 297 148 green open infra-000012 RSf_kUwDSR2xEuKRZMPqZQ 3 1 227623 0 295 147 green open .kibana_7 1SJdCqlZTPWlIAaOUd78yg 1 1 4 0 0 0 green open infra-000010 iXwL3bnqTuGEABbUDa6OVw 3 1 248368 0 317 158 green open infra-000009 YN9EsULWSNaxWeeNvOs0RA 3 1 258799 0 337 168 green open infra-000014 YP0U6R7FQ_GVQVQZ6Yh9Ig 3 1 223788 0 292 146 green open infra-000015 JRBbAbEmSMqK5X40df9HbQ 3 1 224371 0 291 145 green open .orphaned.2020.06.30 n_xQC2dWQzConkvQqei3YA 3 1 9 0 0 0 green open infra-000007 llkkAVSzSOmosWTSAJM_hg 3 1 228584 0 296 148 green open infra-000005 d9BoGQdiQASsS3BBFm2iRA 3 1 227987 0 297 148 green open infra-000003 1-goREK1QUKlQPAIVkWVaQ 3 1 226719 0 295 147 green open .security zeT65uOuRTKZMjg_bbUc1g 1 1 5 0 0 0 green open .kibana-377444158_kubeadmin wvMhDwJkR-mRZQO84K0gUQ 3 1 1 0 0 0 green open infra-000006 5H-KBSXGQKiO7hdapDE23g 3 1 226676 0 295 147 green open infra-000001 eH53BQ-bSxSWR5xYZB6lVg 3 1 341800 0 443 220 green open .kibana-6 RVp7TemSSemGJcsSUmuf3A 1 1 4 0 0 0 green open infra-000011 J7XWBauWSTe0jnzX02fU6A 3 1 226100 0 293 146 green open app-000001 axSAFfONQDmKwatkjPXdtw 3 1 103186 0 126 57 green open infra-000016 m9c1iRLtStWSF1GopaRyCg 3 1 13685 0 19 9 green open infra-000002 Hz6WvINtTvKcQzw-ewmbYg 3 1 228994 0 296 148 green open infra-000013 KR9mMFUpQl-jraYtanyIGw 3 1 228166 0 298 148 green open audit-000001 eERqLdLmQOiQDFES1LBATQ 3 1 0 0 0 0",
"oc get ds fluentd -o json | grep fluentd-init",
"\"containerName\": \"fluentd-init\"",
"oc get kibana kibana -o json",
"[ { \"clusterCondition\": { \"kibana-5fdd766ffd-nb2jj\": [ { \"lastTransitionTime\": \"2020-06-30T14:11:07Z\", \"reason\": \"ContainerCreating\", \"status\": \"True\", \"type\": \"\" }, { \"lastTransitionTime\": \"2020-06-30T14:11:07Z\", \"reason\": \"ContainerCreating\", \"status\": \"True\", \"type\": \"\" } ] }, \"deployment\": \"kibana\", \"pods\": { \"failed\": [], \"notReady\": [] \"ready\": [] }, \"replicaSets\": [ \"kibana-5fdd766ffd\" ], \"replicas\": 1 } ]",
"oc project openshift-logging",
"oc get clusterlogging instance -o yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging . status: 1 collection: logs: fluentdStatus: daemonSet: fluentd 2 nodes: fluentd-2rhqp: ip-10-0-169-13.ec2.internal fluentd-6fgjh: ip-10-0-165-244.ec2.internal fluentd-6l2ff: ip-10-0-128-218.ec2.internal fluentd-54nx5: ip-10-0-139-30.ec2.internal fluentd-flpnn: ip-10-0-147-228.ec2.internal fluentd-n2frh: ip-10-0-157-45.ec2.internal pods: failed: [] notReady: [] ready: - fluentd-2rhqp - fluentd-54nx5 - fluentd-6fgjh - fluentd-6l2ff - fluentd-flpnn - fluentd-n2frh logstore: 3 elasticsearchStatus: - ShardAllocationEnabled: all cluster: activePrimaryShards: 5 activeShards: 5 initializingShards: 0 numDataNodes: 1 numNodes: 1 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterName: elasticsearch nodeConditions: elasticsearch-cdm-mkkdys93-1: nodeCount: 1 pods: client: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c data: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c master: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c visualization: 4 kibanaStatus: - deployment: kibana pods: failed: [] notReady: [] ready: - kibana-7fb4fd4cc9-f2nls replicaSets: - kibana-7fb4fd4cc9 replicas: 1",
"nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. reason: Disk Watermark Low status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-clientdatamaster-0-1 upgradeStatus: {}",
"nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. reason: Disk Watermark High status: \"True\" type: NodeStorage deploymentName: cluster-logging-operator upgradeStatus: {}",
"Elasticsearch Status: Shard Allocation Enabled: shard allocation unknown Cluster: Active Primary Shards: 0 Active Shards: 0 Initializing Shards: 0 Num Data Nodes: 0 Num Nodes: 0 Pending Tasks: 0 Relocating Shards: 0 Status: cluster health unknown Unassigned Shards: 0 Cluster Name: elasticsearch Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: 0/5 nodes are available: 5 node(s) didn't match node selector. Reason: Unschedulable Status: True Type: Unschedulable elasticsearch-cdm-mkkdys93-2: Node Count: 2 Pods: Client: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Data: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Master: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready:",
"Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) Reason: Unschedulable Status: True Type: Unschedulable",
"Status: Collection: Logs: Fluentd Status: Daemon Set: fluentd Nodes: Pods: Failed: Not Ready: Ready:",
"oc project openshift-logging",
"oc describe deployment cluster-logging-operator",
"Name: cluster-logging-operator . Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable Progressing True NewReplicaSetAvailable . Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 62m deployment-controller Scaled up replica set cluster-logging-operator-574b8987df to 1----",
"oc get replicaset",
"NAME DESIRED CURRENT READY AGE cluster-logging-operator-574b8987df 1 1 1 159m elasticsearch-cdm-uhr537yu-1-6869694fb 1 1 1 157m elasticsearch-cdm-uhr537yu-2-857b6d676f 1 1 1 156m elasticsearch-cdm-uhr537yu-3-5b6fdd8cfd 1 1 1 155m kibana-5bd5544f87 1 1 1 157m",
"oc describe replicaset cluster-logging-operator-574b8987df",
"Name: cluster-logging-operator-574b8987df . Replicas: 1 current / 1 desired Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed . Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 66m replicaset-controller Created pod: cluster-logging-operator-574b8987df-qjhqv----",
"oc project openshift-logging",
"oc get Elasticsearch",
"NAME AGE elasticsearch 5h9m",
"oc get Elasticsearch <Elasticsearch-instance> -o yaml",
"oc get Elasticsearch elasticsearch -n openshift-logging -o yaml",
"status: 1 cluster: 2 activePrimaryShards: 30 activeShards: 60 initializingShards: 0 numDataNodes: 3 numNodes: 3 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterHealth: \"\" conditions: [] 3 nodes: 4 - deploymentName: elasticsearch-cdm-zjf34ved-1 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-2 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-3 upgradeStatus: {} pods: 5 client: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt data: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt master: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt shardAllocationEnabled: all",
"status: nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. reason: Disk Watermark Low status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {}",
"status: nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. reason: Disk Watermark High status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {}",
"status: nodes: - conditions: - lastTransitionTime: 2019-04-10T02:26:24Z message: '0/8 nodes are available: 8 node(s) didn''t match node selector.' reason: Unschedulable status: \"True\" type: Unschedulable",
"status: nodes: - conditions: - last Transition Time: 2019-04-10T05:55:51Z message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) reason: Unschedulable status: True type: Unschedulable",
"status: clusterHealth: \"\" conditions: - lastTransitionTime: 2019-04-17T20:01:31Z message: Wrong RedundancyPolicy selected. Choose different RedundancyPolicy or add more nodes with data roles reason: Invalid Settings status: \"True\" type: InvalidRedundancy",
"status: clusterHealth: green conditions: - lastTransitionTime: '2019-04-17T20:12:34Z' message: >- Invalid master nodes count. Please ensure there are no more than 3 total nodes with master roles reason: Invalid Settings status: 'True' type: InvalidMasters",
"status: clusterHealth: green conditions: - lastTransitionTime: \"2021-05-07T01:05:13Z\" message: Changing the storage structure for a custom resource is not supported reason: StorageStructureChangeIgnored status: 'True' type: StorageStructureChangeIgnored",
"oc get pods --selector component=elasticsearch -o name",
"pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7",
"oc exec elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -- indices",
"Defaulting container name to elasticsearch. Use 'oc describe pod/elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -n openshift-logging' to see all of the containers in this pod. green open infra-000002 S4QANnf1QP6NgCegfnrnbQ 3 1 119926 0 157 78 green open audit-000001 8_EQx77iQCSTzFOXtxRqFw 3 1 0 0 0 0 green open .security iDjscH7aSUGhIdq0LheLBQ 1 1 5 0 0 0 green open .kibana_-377444158_kubeadmin yBywZ9GfSrKebz5gWBZbjw 3 1 1 0 0 0 green open infra-000001 z6Dpe__ORgiopEpW6Yl44A 3 1 871000 0 874 436 green open app-000001 hIrazQCeSISewG3c2VIvsQ 3 1 2453 0 3 1 green open .kibana_1 JCitcBMSQxKOvIq6iQW6wg 1 1 0 0 0 0 green open .kibana_-1595131456_user1 gIYFIEGRRe-ka0W3okS-mQ 3 1 1 0 0 0",
"oc get pods --selector component=elasticsearch -o name",
"pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7",
"oc describe pod elasticsearch-cdm-1godmszn-1-6f8495-vp4lw",
". Status: Running . Containers: elasticsearch: Container ID: cri-o://b7d44e0a9ea486e27f47763f5bb4c39dfd2 State: Running Started: Mon, 08 Jun 2020 10:17:56 -0400 Ready: True Restart Count: 0 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . proxy: Container ID: cri-o://3f77032abaddbb1652c116278652908dc01860320b8a4e741d06894b2f8f9aa1 State: Running Started: Mon, 08 Jun 2020 10:18:38 -0400 Ready: True Restart Count: 0 . Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True . Events: <none>",
"oc get deployment --selector component=elasticsearch -o name",
"deployment.extensions/elasticsearch-cdm-1gon-1 deployment.extensions/elasticsearch-cdm-1gon-2 deployment.extensions/elasticsearch-cdm-1gon-3",
"oc describe deployment elasticsearch-cdm-1gon-1",
". Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . Conditions: Type Status Reason ---- ------ ------ Progressing Unknown DeploymentPaused Available True MinimumReplicasAvailable . Events: <none>",
"oc get replicaSet --selector component=elasticsearch -o name replicaset.extensions/elasticsearch-cdm-1gon-1-6f8495 replicaset.extensions/elasticsearch-cdm-1gon-2-5769cf replicaset.extensions/elasticsearch-cdm-1gon-3-f66f7d",
"oc describe replicaSet elasticsearch-cdm-1gon-1-6f8495",
". Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8@sha256:4265742c7cdd85359140e2d7d703e4311b6497eec7676957f455d6908e7b1c25 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . Events: <none>",
"oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')",
"tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- health",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/nodes?v",
"-n openshift-logging get pods -l component=elasticsearch",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/master?v",
"logs <elasticsearch_master_pod_name> -c elasticsearch -n openshift-logging",
"logs <elasticsearch_node_name> -c elasticsearch -n openshift-logging",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/recovery?active_only=true",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- health |grep number_of_pending_tasks",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/settings?pretty",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/settings?pretty -X PUT -d '{\"persistent\": {\"cluster.routing.allocation.enable\":\"all\"}}'",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/indices?v",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name>/_cache/clear?pretty",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name>/_settings?pretty -X PUT -d '{\"index.allocation.max_retries\":10}'",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_search/scroll/_all -X DELETE",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name>/_settings?pretty -X PUT -d '{\"index.unassigned.node_left.delayed_timeout\":\"10m\"}'",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/indices?v",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_red_index_name> -X DELETE",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_nodes/stats?pretty",
"-n openshift-logging get po -o wide",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/health?pretty | grep unassigned_shards",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"-n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"-n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE",
"-n openshift-logging get po -o wide",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/health?pretty | grep relocating_shards",
"-n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"-n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"-n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"-n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_all/_settings?pretty -X PUT -d '{\"index.blocks.read_only_allow_delete\": null}'",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"-n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"-n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html-single/logging/index |
16.4. Configuration Examples | 16.4. Configuration Examples 16.4.1. Enabling SELinux Labeled NFS Support The following example demonstrates how to enable SELinux labeled NFS support. This example assumes that the nfs-utils package is installed, that the SELinux targeted policy is used, and that SELinux is running in enforcing mode. Note Steps 1-3 are supposed to be performed on the NFS server, nfs-srv . If the NFS server is running, stop it: Confirm that the server is stopped: Edit the /etc/sysconfig/nfs file to set the RPCNFSDARGS flag to "-V 4.2" : Start the server again and confirm that it is running. The output will contain information below, only the time stamp will differ: On the client side, mount the NFS server: All SELinux labels are now successfully passed from the server to the client: Note If you enable labeled NFS support for home directories or other content, the content will be labeled the same as it was on an EXT file system. Also note that mounting systems with different versions of NFS or an attempt to mount a server that does not support labeled NFS could cause errors to be returned. | [
"systemctl stop nfs",
"systemctl status nfs nfs-server.service - NFS Server Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled) Active: inactive (dead)",
"Optional arguments passed to rpc.nfsd. See rpc.nfsd(8) RPCNFSDARGS=\"-V 4.2\"",
"systemctl start nfs",
"systemctl status nfs nfs-server.service - NFS Server Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled) Active: active (exited) since Wed 2013-08-28 14:07:11 CEST; 4s ago",
"mount -o v4.2 server:mntpoint localmountpoint",
"[nfs-srv]USD ls -Z file -rw-rw-r--. user user unconfined_u:object_r:svirt_image_t:s0 file [nfs-client]USD ls -Z file -rw-rw-r--. user user unconfined_u:object_r:svirt_image_t:s0 file"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-managing_confined_services-nfs-configuration_examples |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate and prioritize your feedback regarding our documentation. Provide as much detail as possible, so that your request can be quickly addressed. Prerequisites You are logged in to the Red Hat Customer Portal. Procedure To provide feedback, perform the following steps: Click the following link: Create Issue Describe the issue or enhancement in the Summary text box. Provide details about the issue or requested enhancement in the Description text box. Type your name in the Reporter text box. Click the Create button. This action creates a documentation ticket and routes it to the appropriate documentation team. Thank you for taking the time to provide feedback. | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_remediating_system_issues_using_red_hat_insights_tasks/proc-providing-feedback-on-redhat-documentation |
function::cpu_clock_us | function::cpu_clock_us Name function::cpu_clock_us - Number of microseconds on the given cpu's clock Synopsis Arguments cpu Which processor's clock to read Description This function returns the number of microseconds on the given cpu's clock. This is always monotonic comparing on the same cpu, but may have some drift between cpus (within about a jiffy). | [
"cpu_clock_us:long(cpu:long)"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-cpu-clock-us |
Chapter 17. Web Servers and Services | Chapter 17. Web Servers and Services Apache HTTP Server 2.4 Version 2.4 of the Apache HTTP Server ( httpd ) is included in Red Hat Enterprise Linux 7, and offers a range of new features: an enhanced version of the "Event" processing module, improving asynchronous request process and performance; native FastCGI support in the mod_proxy module; support for embedded scripting using the Lua language. More information about the features and changes in httpd 2.4 can be found at http://httpd.apache.org/docs/2.4/new_features_2_4.html . A guide to adapting configuration files is also available: http://httpd.apache.org/docs/2.4/upgrading.html . MariaDB 5.5 MariaDB is the default implementation of MySQL in Red Hat Enterprise Linux 7. MariaDB is a community-developed fork of the MySQL database project, and provides a replacement for MySQL. MariaDB preserves API and ABI compatibility with MySQL and adds several new features; for example, a non-blocking client API library, the Aria and XtraDB storage engines with enhanced performance, better server status variables, and enhanced replication. Detailed information about MariaDB can be found at https://mariadb.com/kb/en/what-is-mariadb-55/ . PostgreSQL 9.2 PostgreSQL is an advanced Object-Relational database management system (DBMS). The postgresql packages include the PostgreSQL server package, client programs, and libraries needed to access a PostgreSQL DBMS server. Red Hat Enterprise Linux 7 features version 9.2 of PostgreSQL. For a list of new features, bug fixes and possible incompatibilities against version 8.4 packaged in Red Hat Enterprise Linux 6, please refer to the upstream release notes: http://www.postgresql.org/docs/9.2/static/release-9-0.html http://www.postgresql.org/docs/9.2/static/release-9-1.html http://www.postgresql.org/docs/9.2/static/release-9-2.html Or the PostgreSQL wiki pages: http://wiki.postgresql.org/wiki/What's_new_in_PostgreSQL_9.0 http://wiki.postgresql.org/wiki/What's_new_in_PostgreSQL_9.1 http://wiki.postgresql.org/wiki/What's_new_in_PostgreSQL_9.2 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.0_release_notes/chap-red_hat_enterprise_linux-7.0_release_notes-web_servers_and_services |
Chapter 32. Load balancing with MetalLB | Chapter 32. Load balancing with MetalLB 32.1. About MetalLB and the MetalLB Operator As a cluster administrator, you can add the MetalLB Operator to your cluster so that when a service of type LoadBalancer is added to the cluster, MetalLB can add an external IP address for the service. The external IP address is added to the host network for your cluster. 32.1.1. When to use MetalLB Using MetalLB is valuable when you have a bare-metal cluster, or an infrastructure that is like bare metal, and you want fault-tolerant access to an application through an external IP address. You must configure your networking infrastructure to ensure that network traffic for the external IP address is routed from clients to the host network for the cluster. After deploying MetalLB with the MetalLB Operator, when you add a service of type LoadBalancer , MetalLB provides a platform-native load balancer. MetalLB operating in layer2 mode provides support for failover by utilizing a mechanism similar to IP failover. However, instead of relying on the virtual router redundancy protocol (VRRP) and keepalived, MetalLB leverages a gossip-based protocol to identify instances of node failure. When a failover is detected, another node assumes the role of the leader node, and a gratuitous ARP message is dispatched to broadcast this change. MetalLB operating in layer3 or border gateway protocol (BGP) mode delegates failure detection to the network. The BGP router or routers that the OpenShift Container Platform nodes have established a connection with will identify any node failure and terminate the routes to that node. Using MetalLB instead of IP failover is preferable for ensuring high availability of pods and services. 32.1.2. MetalLB Operator custom resources The MetalLB Operator monitors its own namespace for the following custom resources: MetalLB When you add a MetalLB custom resource to the cluster, the MetalLB Operator deploys MetalLB on the cluster. The Operator only supports a single instance of the custom resource. If the instance is deleted, the Operator removes MetalLB from the cluster. IPAddressPool MetalLB requires one or more pools of IP addresses that it can assign to a service when you add a service of type LoadBalancer . An IPAddressPool includes a list of IP addresses. The list can be a single IP address that is set using a range, such as 1.1.1.1-1.1.1.1, a range specified in CIDR notation, a range specified as a starting and ending address separated by a hyphen, or a combination of the three. An IPAddressPool requires a name. The documentation uses names like doc-example , doc-example-reserved , and doc-example-ipv6 . The MetalLB controller assigns IP addresses from a pool of addresses in an IPAddressPool . L2Advertisement and BGPAdvertisement custom resources enable the advertisement of a given IP from a given pool. You can assign IP addresses from an IPAddressPool to services and namespaces by using the spec.serviceAllocation specification in the IPAddressPool custom resource. Note A single IPAddressPool can be referenced by a L2 advertisement and a BGP advertisement. BGPPeer The BGP peer custom resource identifies the BGP router for MetalLB to communicate with, the AS number of the router, the AS number for MetalLB, and customizations for route advertisement. MetalLB advertises the routes for service load-balancer IP addresses to one or more BGP peers. 
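As an illustration only, a minimal IPAddressPool and BGPPeer pair might look like the following sketch; the resource names, the address range, the peer address, and the AS numbers are placeholder values rather than settings taken from this documentation:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: doc-example
  namespace: metallb-system
spec:
  addresses:
  - 203.0.113.10-203.0.113.50
---
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: doc-example-peer
  namespace: metallb-system
spec:
  peerAddress: 10.0.0.1
  peerASN: 64501
  myASN: 64500

An L2Advertisement or BGPAdvertisement custom resource, described below, selects a pool such as doc-example by name to control how its addresses are announced.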
BFDProfile The BFD profile custom resource configures Bidirectional Forwarding Detection (BFD) for a BGP peer. BFD provides faster path failure detection than BGP alone provides. L2Advertisement The L2Advertisement custom resource advertises an IP coming from an IPAddressPool using the L2 protocol. BGPAdvertisement The BGPAdvertisement custom resource advertises an IP coming from an IPAddressPool using the BGP protocol. After you add the MetalLB custom resource to the cluster and the Operator deploys MetalLB, the controller and speaker MetalLB software components begin running. MetalLB validates all relevant custom resources. 32.1.3. MetalLB software components When you install the MetalLB Operator, the metallb-operator-controller-manager deployment starts a pod. The pod is the implementation of the Operator. The pod monitors for changes to all the relevant resources. When the Operator starts an instance of MetalLB, it starts a controller deployment and a speaker daemon set. Note You can configure deployment specifications in the MetalLB custom resource to manage how controller and speaker pods deploy and run in your cluster. For more information about these deployment specifications, see the Additional resources section. controller The Operator starts the deployment and a single pod. When you add a service of type LoadBalancer , Kubernetes uses the controller to allocate an IP address from an address pool. In case of a service failure, verify you have the following entry in your controller pod logs: Example output "event":"ipAllocated","ip":"172.22.0.201","msg":"IP address assigned by controller speaker The Operator starts a daemon set for speaker pods. By default, a pod is started on each node in your cluster. You can limit the pods to specific nodes by specifying a node selector in the MetalLB custom resource when you start MetalLB. If the controller allocated the IP address to the service and service is still unavailable, read the speaker pod logs. If the speaker pod is unavailable, run the oc describe pod -n command. For layer 2 mode, after the controller allocates an IP address for the service, the speaker pods use an algorithm to determine which speaker pod on which node will announce the load balancer IP address. The algorithm involves hashing the node name and the load balancer IP address. For more information, see "MetalLB and external traffic policy". The speaker uses Address Resolution Protocol (ARP) to announce IPv4 addresses and Neighbor Discovery Protocol (NDP) to announce IPv6 addresses. For Border Gateway Protocol (BGP) mode, after the controller allocates an IP address for the service, each speaker pod advertises the load balancer IP address with its BGP peers. You can configure which nodes start BGP sessions with BGP peers. Requests for the load balancer IP address are routed to the node with the speaker that announces the IP address. After the node receives the packets, the service proxy routes the packets to an endpoint for the service. The endpoint can be on the same node in the optimal case, or it can be on another node. The service proxy chooses an endpoint each time a connection is established. 32.1.4. MetalLB and external traffic policy With layer 2 mode, one node in your cluster receives all the traffic for the service IP address. With BGP mode, a router on the host network opens a connection to one of the nodes in the cluster for a new client connection. How your cluster handles the traffic after it enters the node is affected by the external traffic policy. 
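The policy is set on the Service object itself. As a rough sketch (the service name, selector, and port numbers here are placeholders), a LoadBalancer service that requests the local policy might look like this:

apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: LoadBalancer
  externalTrafficPolicy: Local

The two policy values behave as follows.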
cluster This is the default value for spec.externalTrafficPolicy . With the cluster traffic policy, after the node receives the traffic, the service proxy distributes the traffic to all the pods in your service. This policy provides uniform traffic distribution across the pods, but it obscures the client IP address and it can appear to the application in your pods that the traffic originates from the node rather than the client. local With the local traffic policy, after the node receives the traffic, the service proxy only sends traffic to the pods on the same node. For example, if the speaker pod on node A announces the external service IP, then all traffic is sent to node A. After the traffic enters node A, the service proxy only sends traffic to pods for the service that are also on node A. Pods for the service that are on additional nodes do not receive any traffic from node A. Pods for the service on additional nodes act as replicas in case failover is needed. This policy does not affect the client IP address. Application pods can determine the client IP address from the incoming connections. Note The following information is important when configuring the external traffic policy in BGP mode. Although MetalLB advertises the load balancer IP address from all the eligible nodes, the number of nodes loadbalancing the service can be limited by the capacity of the router to establish equal-cost multipath (ECMP) routes. If the number of nodes advertising the IP is greater than the ECMP group limit of the router, the router will use less nodes than the ones advertising the IP. For example, if the external traffic policy is set to local and the router has an ECMP group limit set to 16 and the pods implementing a LoadBalancer service are deployed on 30 nodes, this would result in pods deployed on 14 nodes not receiving any traffic. In this situation, it would be preferable to set the external traffic policy for the service to cluster . 32.1.5. MetalLB concepts for layer 2 mode In layer 2 mode, the speaker pod on one node announces the external IP address for a service to the host network. From a network perspective, the node appears to have multiple IP addresses assigned to a network interface. Note In layer 2 mode, MetalLB relies on ARP and NDP. These protocols implement local address resolution within a specific subnet. In this context, the client must be able to reach the VIP assigned by MetalLB that exists on the same subnet as the nodes announcing the service in order for MetalLB to work. The speaker pod responds to ARP requests for IPv4 services and NDP requests for IPv6. In layer 2 mode, all traffic for a service IP address is routed through one node. After traffic enters the node, the service proxy for the CNI network provider distributes the traffic to all the pods for the service. Because all traffic for a service enters through a single node in layer 2 mode, in a strict sense, MetalLB does not implement a load balancer for layer 2. Rather, MetalLB implements a failover mechanism for layer 2 so that when a speaker pod becomes unavailable, a speaker pod on a different node can announce the service IP address. When a node becomes unavailable, failover is automatic. The speaker pods on the other nodes detect that a node is unavailable and a new speaker pod and node take ownership of the service IP address from the failed node. 
The preceding graphic shows the following concepts related to MetalLB: An application is available through a service that has a cluster IP on the 172.130.0.0/16 subnet. That IP address is accessible from inside the cluster. The service also has an external IP address that MetalLB assigned to the service, 192.168.100.200 . Nodes 1 and 3 have a pod for the application. The speaker daemon set runs a pod on each node. The MetalLB Operator starts these pods. Each speaker pod is a host-networked pod. The IP address for the pod is identical to the IP address for the node on the host network. The speaker pod on node 1 uses ARP to announce the external IP address for the service, 192.168.100.200 . The speaker pod that announces the external IP address must be on the same node as an endpoint for the service and the endpoint must be in the Ready condition. Client traffic is routed to the host network and connects to the 192.168.100.200 IP address. After traffic enters the node, the service proxy sends the traffic to the application pod on the same node or another node according to the external traffic policy that you set for the service. If the external traffic policy for the service is set to cluster , the node that advertises the 192.168.100.200 load balancer IP address is selected from the nodes where a speaker pod is running. Only that node can receive traffic for the service. If the external traffic policy for the service is set to local , the node that advertises the 192.168.100.200 load balancer IP address is selected from the nodes where a speaker pod is running and at least an endpoint of the service. Only that node can receive traffic for the service. In the preceding graphic, either node 1 or 3 would advertise 192.168.100.200 . If node 1 becomes unavailable, the external IP address fails over to another node. On another node that has an instance of the application pod and service endpoint, the speaker pod begins to announce the external IP address, 192.168.100.200 and the new node receives the client traffic. In the diagram, the only candidate is node 3. 32.1.6. MetalLB concepts for BGP mode In BGP mode, by default each speaker pod advertises the load balancer IP address for a service to each BGP peer. It is also possible to advertise the IPs coming from a given pool to a specific set of peers by adding an optional list of BGP peers. BGP peers are commonly network routers that are configured to use the BGP protocol. When a router receives traffic for the load balancer IP address, the router picks one of the nodes with a speaker pod that advertised the IP address. The router sends the traffic to that node. After traffic enters the node, the service proxy for the CNI network plugin distributes the traffic to all the pods for the service. The directly-connected router on the same layer 2 network segment as the cluster nodes can be configured as a BGP peer. If the directly-connected router is not configured as a BGP peer, you need to configure your network so that packets for load balancer IP addresses are routed between the BGP peers and the cluster nodes that run the speaker pods. Each time a router receives new traffic for the load balancer IP address, it creates a new connection to a node. Each router manufacturer has an implementation-specific algorithm for choosing which node to initiate the connection with. However, the algorithms commonly are designed to distribute traffic across the available nodes for the purpose of balancing the network load. 
If a node becomes unavailable, the router initiates a new connection with another node that has a speaker pod that advertises the load balancer IP address. Figure 32.1. MetalLB topology diagram for BGP mode The preceding graphic shows the following concepts related to MetalLB: An application is available through a service that has an IPv4 cluster IP on the 172.130.0.0/16 subnet. That IP address is accessible from inside the cluster. The service also has an external IP address that MetalLB assigned to the service, 203.0.113.200 . Nodes 2 and 3 have a pod for the application. The speaker daemon set runs a pod on each node. The MetalLB Operator starts these pods. You can configure MetalLB to specify which nodes run the speaker pods. Each speaker pod is a host-networked pod. The IP address for the pod is identical to the IP address for the node on the host network. Each speaker pod starts a BGP session with all BGP peers and advertises the load balancer IP addresses or aggregated routes to the BGP peers. The speaker pods advertise that they are part of Autonomous System 65010. The diagram shows a router, R1, as a BGP peer within the same Autonomous System. However, you can configure MetalLB to start BGP sessions with peers that belong to other Autonomous Systems. All the nodes with a speaker pod that advertises the load balancer IP address can receive traffic for the service. If the external traffic policy for the service is set to cluster , all the nodes where a speaker pod is running advertise the 203.0.113.200 load balancer IP address and all the nodes with a speaker pod can receive traffic for the service. The host prefix is advertised to the router peer only if the external traffic policy is set to cluster. If the external traffic policy for the service is set to local , then all the nodes where a speaker pod is running and at least an endpoint of the service is running can advertise the 203.0.113.200 load balancer IP address. Only those nodes can receive traffic for the service. In the preceding graphic, nodes 2 and 3 would advertise 203.0.113.200 . You can configure MetalLB to control which speaker pods start BGP sessions with specific BGP peers by specifying a node selector when you add a BGP peer custom resource. Any routers, such as R1, that are configured to use BGP can be set as BGP peers. Client traffic is routed to one of the nodes on the host network. After traffic enters the node, the service proxy sends the traffic to the application pod on the same node or another node according to the external traffic policy that you set for the service. If a node becomes unavailable, the router detects the failure and initiates a new connection with another node. You can configure MetalLB to use a Bidirectional Forwarding Detection (BFD) profile for BGP peers. BFD provides faster link failure detection so that routers can initiate new connections earlier than without BFD. 32.1.7. Limitations and restrictions 32.1.7.1. Infrastructure considerations for MetalLB MetalLB is primarily useful for on-premise, bare metal installations because these installations do not include a native load-balancer capability. In addition to bare metal installations, installations of OpenShift Container Platform on some infrastructures might not include a native load-balancer capability. 
For example, the following infrastructures can benefit from adding the MetalLB Operator: Bare metal VMware vSphere IBM Z(R) and IBM(R) LinuxONE IBM Z(R) and IBM(R) LinuxONE for Red Hat Enterprise Linux (RHEL) KVM IBM Power(R) MetalLB Operator and MetalLB are supported with the OpenShift SDN and OVN-Kubernetes network providers. 32.1.7.2. Limitations for layer 2 mode 32.1.7.2.1. Single-node bottleneck MetalLB routes all traffic for a service through a single node, so the node can become a bottleneck and limit performance. Layer 2 mode limits the ingress bandwidth for your service to the bandwidth of a single node. This is a fundamental limitation of using ARP and NDP to direct traffic. 32.1.7.2.2. Slow failover performance Failover between nodes depends on cooperation from the clients. When a failover occurs, MetalLB sends gratuitous ARP packets to notify clients that the MAC address associated with the service IP has changed. Most client operating systems handle gratuitous ARP packets correctly and update their neighbor caches promptly. When clients update their caches quickly, failover completes within a few seconds. Clients typically fail over to a new node within 10 seconds. However, some client operating systems either do not handle gratuitous ARP packets at all or have outdated implementations that delay the cache update. Recent versions of common operating systems such as Windows, macOS, and Linux implement layer 2 failover correctly. Issues with slow failover are not expected except for older and less common client operating systems. To minimize the impact from a planned failover on outdated clients, keep the old node running for a few minutes after flipping leadership. The old node can continue to forward traffic for outdated clients until their caches refresh. During an unplanned failover, the service IPs are unreachable until the outdated clients refresh their cache entries. 32.1.7.2.3. Additional Network and MetalLB cannot use same network Using the same VLAN for both MetalLB and an additional network interface set up on a source pod might result in a connection failure. This occurs when both the MetalLB IP and the source pod reside on the same node. To avoid connection failures, place the MetalLB IP in a different subnet from the one where the source pod resides. This configuration ensures that traffic from the source pod will take the default gateway. Consequently, the traffic can effectively reach its destination by using the OVN overlay network, ensuring that the connection functions as intended. 32.1.7.3. Limitations for BGP mode 32.1.7.3.1. Node failure can break all active connections MetalLB shares a limitation that is common to BGP-based load balancing. When a BGP session terminates, such as when a node fails or when a speaker pod restarts, the session termination might result in resetting all active connections. End users can experience a Connection reset by peer message. The consequence of a terminated BGP session is implementation-specific for each router manufacturer. However, you can anticipate that a change in the number of speaker pods affects the number of BGP sessions and that active connections with BGP peers will break. To avoid or reduce the likelihood of a service interruption, you can specify a node selector when you add a BGP peer. By limiting the number of nodes that start BGP sessions, a fault on a node that does not have a BGP session has no effect on connections to the service. 32.1.7.3.2.
Support for a single ASN and a single router ID only When you add a BGP peer custom resource, you specify the spec.myASN field to identify the Autonomous System Number (ASN) that MetalLB belongs to. OpenShift Container Platform uses an implementation of BGP with MetalLB that requires MetalLB to belong to a single ASN. If you attempt to add a BGP peer and specify a different value for spec.myASN than an existing BGP peer custom resource, you receive an error. Similarly, when you add a BGP peer custom resource, the spec.routerID field is optional. If you specify a value for this field, you must specify the same value for all other BGP peer custom resources that you add. The limitation to support a single ASN and single router ID is a difference with the community-supported implementation of MetalLB. 32.1.8. Additional resources Comparison: Fault tolerant access to external IP addresses Removing IP failover Deployment specifications for MetalLB 32.2. Installing the MetalLB Operator As a cluster administrator, you can add the MetallB Operator so that the Operator can manage the lifecycle for an instance of MetalLB on your cluster. MetalLB and IP failover are incompatible. If you configured IP failover for your cluster, perform the steps to remove IP failover before you install the Operator. 32.2.1. Installing the MetalLB Operator from the OperatorHub using the web console As a cluster administrator, you can install the MetalLB Operator by using the OpenShift Container Platform web console. Prerequisites Log in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform web console, navigate to Operators OperatorHub . Type a keyword into the Filter by keyword box or scroll to find the Operator you want. For example, type metallb to find the MetalLB Operator. You can also filter options by Infrastructure Features . For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments. On the Install Operator page, accept the defaults and click Install . Verification To confirm that the installation is successful: Navigate to the Operators Installed Operators page. Check that the Operator is installed in the openshift-operators namespace and that its status is Succeeded . If the Operator is not installed successfully, check the status of the Operator and review the logs: Navigate to the Operators Installed Operators page and inspect the Status column for any errors or failures. Navigate to the Workloads Pods page and check the logs in any pods in the openshift-operators project that are reporting issues. 32.2.2. Installing from OperatorHub using the CLI Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub using the CLI. You can use the OpenShift CLI ( oc ) to install the MetalLB Operator. It is recommended that when using the CLI you install the Operator in the metallb-system namespace. Prerequisites A cluster installed on bare-metal hardware. Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. 
Procedure Create a namespace for the MetalLB Operator by entering the following command: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: name: metallb-system EOF Create an Operator group custom resource (CR) in the namespace: USD cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: metallb-operator namespace: metallb-system EOF Confirm the Operator group is installed in the namespace: USD oc get operatorgroup -n metallb-system Example output NAME AGE metallb-operator 14m Create a Subscription CR: Define the Subscription CR and save the YAML file, for example, metallb-sub.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: metallb-operator-sub namespace: metallb-system spec: channel: stable name: metallb-operator source: redhat-operators 1 sourceNamespace: openshift-marketplace 1 You must specify the redhat-operators value. To create the Subscription CR, run the following command: USD oc create -f metallb-sub.yaml Optional: To ensure BGP and BFD metrics appear in Prometheus, you can label the namespace as in the following command: USD oc label ns metallb-system "openshift.io/cluster-monitoring=true" Verification The verification steps assume the MetalLB Operator is installed in the metallb-system namespace. Confirm the install plan is in the namespace: USD oc get installplan -n metallb-system Example output NAME CSV APPROVAL APPROVED install-wzg94 metallb-operator.4.14.0-nnnnnnnnnnnn Automatic true Note Installation of the Operator might take a few seconds. To verify that the Operator is installed, enter the following command: USD oc get clusterserviceversion -n metallb-system \ -o custom-columns=Name:.metadata.name,Phase:.status.phase Example output Name Phase metallb-operator.4.14.0-nnnnnnnnnnnn Succeeded 32.2.3. Starting MetalLB on your cluster After you install the Operator, you need to configure a single instance of a MetalLB custom resource. After you configure the custom resource, the Operator starts MetalLB on your cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the MetalLB Operator. Procedure This procedure assumes the MetalLB Operator is installed in the metallb-system namespace. If you installed using the web console substitute openshift-operators for the namespace. Create a single instance of a MetalLB custom resource: USD cat << EOF | oc apply -f - apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system EOF Verification Confirm that the deployment for the MetalLB controller and the daemon set for the MetalLB speaker are running. Verify that the deployment for the controller is running: USD oc get deployment -n metallb-system controller Example output NAME READY UP-TO-DATE AVAILABLE AGE controller 1/1 1 1 11m Verify that the daemon set for the speaker is running: USD oc get daemonset -n metallb-system speaker Example output NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE speaker 6 6 6 6 6 kubernetes.io/os=linux 18m The example output indicates 6 speaker pods. The number of speaker pods in your cluster might differ from the example output. Make sure the output indicates one pod for each node in your cluster. 32.2.4. 
Deployment specifications for MetalLB When you start an instance of MetalLB using the MetalLB custom resource, you can configure deployment specifications in the MetalLB custom resource to manage how the controller or speaker pods deploy and run in your cluster. Use these deployment specifications to manage the following tasks: Select nodes for MetalLB pod deployment. Manage scheduling by using pod priority and pod affinity. Assign CPU limits for MetalLB pods. Assign a container RuntimeClass for MetalLB pods. Assign metadata for MetalLB pods. 32.2.4.1. Limit speaker pods to specific nodes By default, when you start MetalLB with the MetalLB Operator, the Operator starts an instance of a speaker pod on each node in the cluster. Only the nodes with a speaker pod can advertise a load balancer IP address. You can configure the MetalLB custom resource with a node selector to specify which nodes run the speaker pods. The most common reason to limit the speaker pods to specific nodes is to ensure that only nodes with network interfaces on specific networks advertise load balancer IP addresses. Only the nodes with a running speaker pod are advertised as destinations of the load balancer IP address. If you limit the speaker pods to specific nodes and specify local for the external traffic policy of a service, then you must ensure that the application pods for the service are deployed to the same nodes. Example configuration to limit speaker pods to worker nodes apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: nodeSelector: 1 node-role.kubernetes.io/worker: "" speakerTolerations: 2 - key: "Example" operator: "Exists" effect: "NoExecute" 1 The example configuration specifies to assign the speaker pods to worker nodes, but you can specify labels that you assigned to nodes or any valid node selector. 2 In this example configuration, the pod that this toleration is attached to tolerates any taint that matches the key value and effect value using the operator . After you apply a manifest with the spec.nodeSelector field, you can check the number of pods that the Operator deployed with the oc get daemonset -n metallb-system speaker command. Similarly, you can display the nodes that match your labels with a command like oc get nodes -l node-role.kubernetes.io/worker= . You can optionally allow the node to control which speaker pods should, or should not, be scheduled on them by using affinity rules. You can also limit these pods by applying a list of tolerations. For more information about affinity rules, taints, and tolerations, see the additional resources. 32.2.4.2. Configuring pod priority and pod affinity in a MetalLB deployment You can optionally assign pod priority and pod affinity rules to controller and speaker pods by configuring the MetalLB custom resource. The pod priority indicates the relative importance of a pod on a node and schedules the pod based on this priority. Set a high priority on your controller or speaker pod to ensure scheduling priority over other pods on the node. Pod affinity manages relationships among pods. Assign pod affinity to the controller or speaker pods to control on what node the scheduler places the pod in the context of pod relationships. For example, you can use pod affinity rules to ensure that certain pods are located on the same node or nodes, which can help improve network communication and reduce latency between those components. Prerequisites You are logged in as a user with cluster-admin privileges. 
You have installed the MetalLB Operator. You have started the MetalLB Operator on your cluster. Procedure Create a PriorityClass custom resource, such as myPriorityClass.yaml , to configure the priority level. This example defines a PriorityClass named high-priority with a value of 1000000 . Pods that are assigned this priority class are considered higher priority during scheduling compared to pods with lower priority classes: apiVersion: scheduling.k8s.io/v1 kind: PriorityClass metadata: name: high-priority value: 1000000 Apply the PriorityClass custom resource configuration: USD oc apply -f myPriorityClass.yaml Create a MetalLB custom resource, such as MetalLBPodConfig.yaml , to specify the priorityClassName and podAffinity values: apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: logLevel: debug controllerConfig: priorityClassName: high-priority 1 affinity: podAffinity: 2 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app: metallb topologyKey: kubernetes.io/hostname speakerConfig: priorityClassName: high-priority affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app: metallb topologyKey: kubernetes.io/hostname 1 Specifies the priority class for the MetalLB controller pods. In this case, it is set to high-priority . 2 Specifies that you are configuring pod affinity rules. These rules dictate how pods are scheduled in relation to other pods or nodes. This configuration instructs the scheduler to schedule pods that have the label app: metallb onto nodes that share the same hostname. This helps to co-locate MetalLB-related pods on the same nodes, potentially optimizing network communication, latency, and resource usage between these pods. Apply the MetalLB custom resource configuration: USD oc apply -f MetalLBPodConfig.yaml Verification To view the priority class that you assigned to pods in the metallb-system namespace, run the following command: USD oc get pods -n metallb-system -o custom-columns=NAME:.metadata.name,PRIORITY:.spec.priorityClassName Example output NAME PRIORITY controller-584f5c8cd8-5zbvg high-priority metallb-operator-controller-manager-9c8d9985-szkqg <none> metallb-operator-webhook-server-c895594d4-shjgx <none> speaker-dddf7 high-priority To verify that the scheduler placed pods according to pod affinity rules, view the metadata for the pod's node or nodes by running the following command: USD oc get pod -o=custom-columns=NODE:.spec.nodeName,NAME:.metadata.name -n metallb-system 32.2.4.3. Configuring pod CPU limits in a MetalLB deployment You can optionally assign pod CPU limits to controller and speaker pods by configuring the MetalLB custom resource. Defining CPU limits for the controller or speaker pods helps you to manage compute resources on the node. This ensures all pods on the node have the necessary compute resources to manage workloads and cluster housekeeping. Prerequisites You are logged in as a user with cluster-admin privileges. You have installed the MetalLB Operator. 
Procedure Create a MetalLB custom resource file, such as CPULimits.yaml , to specify the cpu value for the controller and speaker pods: apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: logLevel: debug controllerConfig: resources: limits: cpu: "200m" speakerConfig: resources: limits: cpu: "300m" Apply the MetalLB custom resource configuration: USD oc apply -f CPULimits.yaml Verification To view compute resources for a pod, run the following command, replacing <pod_name> with your target pod: USD oc describe pod <pod_name> 32.2.5. Additional resources Placing pods on specific nodes using node selectors Understanding taints and tolerations Understanding pod priority Understanding pod affinity 32.2.6. steps Configuring MetalLB address pools 32.3. Upgrading the MetalLB If you are currently running version 4.10 or an earlier version of the MetalLB Operator, please note that automatic updates to any version later than 4.10 do not work. Upgrading to a newer version from any version of the MetalLB Operator that is 4.11 or later is successful. For example, upgrading from version 4.12 to version 4.13 will occur smoothly. A summary of the upgrade procedure for the MetalLB Operator from 4.10 and earlier is as follows: Delete the installed MetalLB Operator version for example 4.10. Ensure that the namespace and the metallb custom resource are not removed. Using the CLI, install the MetalLB Operator 4.14 in the same namespace where the version of the MetalLB Operator was installed. Note This procedure does not apply to automatic z-stream updates of the MetalLB Operator, which follow the standard straightforward method. For detailed steps to upgrade the MetalLB Operator from 4.10 and earlier, see the guidance that follows. As a cluster administrator, start the upgrade process by deleting the MetalLB Operator by using the OpenShift CLI ( oc ) or the web console. 32.3.1. Deleting the MetalLB Operator from a cluster using the web console Cluster administrators can delete installed Operators from a selected namespace by using the web console. Prerequisites Access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions. Procedure Navigate to the Operators Installed Operators page. Search for the MetalLB Operator. Then, click on it. On the right side of the Operator Details page, select Uninstall Operator from the Actions drop-down menu. An Uninstall Operator? dialog box is displayed. Select Uninstall to remove the Operator, Operator deployments, and pods. Following this action, the Operator stops running and no longer receives updates. Note This action does not remove resources managed by the Operator, including custom resource definitions (CRDs) and custom resources (CRs). Dashboards and navigation items enabled by the web console and off-cluster resources that continue to run might need manual clean up. To remove these after uninstalling the Operator, you might need to manually delete the Operator CRDs. 32.3.2. Deleting MetalLB Operator from a cluster using the CLI Cluster administrators can delete installed Operators from a selected namespace by using the CLI. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. oc command installed on workstation. 
Procedure Check the current version of the subscribed MetalLB Operator in the currentCSV field: USD oc get subscription metallb-operator -n metallb-system -o yaml | grep currentCSV Example output currentCSV: metallb-operator.4.10.0-202207051316 Delete the subscription: USD oc delete subscription metallb-operator -n metallb-system Example output subscription.operators.coreos.com "metallb-operator" deleted Delete the CSV for the Operator in the target namespace using the currentCSV value from the previous step: USD oc delete clusterserviceversion metallb-operator.4.10.0-202207051316 -n metallb-system Example output clusterserviceversion.operators.coreos.com "metallb-operator.4.10.0-202207051316" deleted 32.3.3. Editing the MetalLB Operator Operator group When upgrading from any MetalLB Operator version up to and including 4.10 to 4.11 and later, remove spec.targetNamespaces from the Operator group custom resource (CR). You must remove this field regardless of whether you used the web console or the CLI to delete the MetalLB Operator. Note The MetalLB Operator version 4.11 or later only supports the AllNamespaces install mode, whereas 4.10 or earlier versions support OwnNamespace or SingleNamespace modes. Prerequisites You have access to an OpenShift Container Platform cluster with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). Procedure List the Operator groups in the metallb-system namespace by running the following command: USD oc get operatorgroup -n metallb-system Example output NAME AGE metallb-system-7jc66 85m Verify that the spec.targetNamespaces field is present in the Operator group CR associated with the metallb-system namespace by running the following command: USD oc get operatorgroup metallb-system-7jc66 -n metallb-system -o yaml Example output apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: "" creationTimestamp: "2023-10-25T09:42:49Z" generateName: metallb-system- generation: 1 name: metallb-system-7jc66 namespace: metallb-system resourceVersion: "25027" uid: f5f644a0-eef8-4e31-a306-e2bbcfaffab3 spec: targetNamespaces: - metallb-system upgradeStrategy: Default status: lastUpdated: "2023-10-25T09:42:49Z" namespaces: - metallb-system Edit the Operator group and remove the targetNamespaces section, including the metallb-system entry, from under the spec section by running the following command: USD oc edit operatorgroup metallb-system-7jc66 -n metallb-system Example output operatorgroup.operators.coreos.com/metallb-system-7jc66 edited Verify that the spec.targetNamespaces field is removed from the Operator group custom resource associated with the metallb-system namespace by running the following command: USD oc get operatorgroup metallb-system-7jc66 -n metallb-system -o yaml Example output apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: "" creationTimestamp: "2023-10-25T09:42:49Z" generateName: metallb-system- generation: 2 name: metallb-system-7jc66 namespace: metallb-system resourceVersion: "61658" uid: f5f644a0-eef8-4e31-a306-e2bbcfaffab3 spec: upgradeStrategy: Default status: lastUpdated: "2023-10-25T14:31:30Z" namespaces: - "" 32.3.4. Upgrading the MetalLB Operator Prerequisites Access the cluster as a user with the cluster-admin role.
Procedure Verify that the metallb-system namespace still exists: USD oc get namespaces | grep metallb-system Example output metallb-system Active 31m Verify the metallb custom resource still exists: USD oc get metallb -n metallb-system Example output NAME AGE metallb 33m Follow the guidance in "Installing from OperatorHub using the CLI" to install the latest 4.14 version of the MetalLB Operator. Note When installing the latest 4.14 version of the MetalLB Operator, you must install the Operator to the same namespace it was previously installed to. Verify the upgraded version of the Operator is now the 4.14 version. USD oc get csv -n metallb-system Example output NAME DISPLAY VERSION REPLACES PHASE metallb-operator.4.14.0-202207051316 MetalLB Operator 4.14.0-202207051316 Succeeded 32.3.5. Additional resources Deleting Operators from a cluster Installing the MetalLB Operator 32.4. Configuring MetalLB address pools As a cluster administrator, you can add, modify, and delete address pools. The MetalLB Operator uses the address pool custom resources to set the IP addresses that MetalLB can assign to services. The namespace used in the examples assume the namespace is metallb-system . 32.4.1. About the IPAddressPool custom resource Note The address pool custom resource definition (CRD) and API documented in "Load balancing with MetalLB" in OpenShift Container Platform 4.10 can still be used in 4.14. However, the enhanced functionality associated with advertising an IP address from an IPAddressPool with layer 2 protocols, or the BGP protocol, is not supported when using the AddressPool CRD. The fields for the IPAddressPool custom resource are described in the following tables. Table 32.1. MetalLB IPAddressPool pool custom resource Field Type Description metadata.name string Specifies the name for the address pool. When you add a service, you can specify this pool name in the metallb.universe.tf/address-pool annotation to select an IP address from a specific pool. The names doc-example , silver , and gold are used throughout the documentation. metadata.namespace string Specifies the namespace for the address pool. Specify the same namespace that the MetalLB Operator uses. metadata.label string Optional: Specifies the key value pair assigned to the IPAddressPool . This can be referenced by the ipAddressPoolSelectors in the BGPAdvertisement and L2Advertisement CRD to associate the IPAddressPool with the advertisement spec.addresses string Specifies a list of IP addresses for MetalLB Operator to assign to services. You can specify multiple ranges in a single pool; they will all share the same settings. Specify each range in CIDR notation or as starting and ending IP addresses separated with a hyphen. spec.autoAssign boolean Optional: Specifies whether MetalLB automatically assigns IP addresses from this pool. Specify false if you want explicitly request an IP address from this pool with the metallb.universe.tf/address-pool annotation. The default value is true . spec.avoidBuggyIPs boolean Optional: This ensures when enabled that IP addresses ending .0 and .255 are not allocated from the pool. The default value is false . Some older consumer network equipment mistakenly block IP addresses ending in .0 and .255. You can assign IP addresses from an IPAddressPool to services and namespaces by configuring the spec.serviceAllocation specification. Table 32.2. 
MetalLB IPAddressPool custom resource spec.serviceAllocation subfields Field Type Description priority int Optional: Defines the priority between IP address pools when more than one IP address pool matches a service or namespace. A lower number indicates a higher priority. namespaces array (string) Optional: Specifies a list of namespaces that you can assign to IP addresses in an IP address pool. namespaceSelectors array (LabelSelector) Optional: Specifies namespace labels that you can assign to IP addresses from an IP address pool by using label selectors in a list format. serviceSelectors array (LabelSelector) Optional: Specifies service labels that you can assign to IP addresses from an address pool by using label selectors in a list format. 32.4.2. Configuring an address pool As a cluster administrator, you can add address pools to your cluster to control the IP addresses that MetalLB can assign to load-balancer services. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example labels: 1 zone: east spec: addresses: - 203.0.113.1-203.0.113.10 - 203.0.113.65-203.0.113.75 1 This label assigned to the IPAddressPool can be referenced by the ipAddressPoolSelectors in the BGPAdvertisement CRD to associate the IPAddressPool with the advertisement. Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Verification View the address pool: USD oc describe -n metallb-system IPAddressPool doc-example Example output Name: doc-example Namespace: metallb-system Labels: zone=east Annotations: <none> API Version: metallb.io/v1beta1 Kind: IPAddressPool Metadata: ... Spec: Addresses: 203.0.113.1-203.0.113.10 203.0.113.65-203.0.113.75 Auto Assign: true Events: <none> Confirm that the address pool name, such as doc-example , and the IP address ranges appear in the output. 32.4.3. Configure MetalLB address pool for VLAN As a cluster administrator, you can add address pools to your cluster to control the IP addresses on a created VLAN that MetalLB can assign to load-balancer services Prerequisites Install the OpenShift CLI ( oc ). Configure a separate VLAN. Log in as a user with cluster-admin privileges. Procedure Create a file, such as ipaddresspool-vlan.yaml , that is similar to the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-vlan labels: zone: east 1 spec: addresses: - 192.168.100.1-192.168.100.254 2 1 This label assigned to the IPAddressPool can be referenced by the ipAddressPoolSelectors in the BGPAdvertisement CRD to associate the IPAddressPool with the advertisement. 2 This IP range must match the subnet assigned to the VLAN on your network. To support layer 2 (L2) mode, the IP address range must be within the same subnet as the cluster nodes. Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool-vlan.yaml To ensure this configuration applies to the VLAN you need to set the spec gatewayConfig.ipForwarding to Global . Run the following command to edit the network configuration custom resource (CR): USD oc edit network.config.openshift/cluster Update the spec.defaultNetwork.ovnKubernetesConfig section to include the gatewayConfig.ipForwarding set to Global . It should look something like this: Example ... 
spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: gatewayConfig: ipForwarding: Global ... 32.4.4. Example address pool configurations 32.4.4.1. Example: IPv4 and CIDR ranges You can specify a range of IP addresses in CIDR notation. You can combine CIDR notation with the notation that uses a hyphen to separate lower and upper bounds. apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-cidr namespace: metallb-system spec: addresses: - 192.168.100.0/24 - 192.168.200.0/24 - 192.168.255.1-192.168.255.5 32.4.4.2. Example: Reserve IP addresses You can set the autoAssign field to false to prevent MetalLB from automatically assigning the IP addresses from the pool. When you add a service, you can request a specific IP address from the pool or you can specify the pool name in an annotation to request any IP address from the pool. apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-reserved namespace: metallb-system spec: addresses: - 10.0.100.0/28 autoAssign: false 32.4.4.3. Example: IPv4 and IPv6 addresses You can add address pools that use IPv4 and IPv6. You can specify multiple ranges in the addresses list, just like several IPv4 examples. Whether the service is assigned a single IPv4 address, a single IPv6 address, or both is determined by how you add the service. The spec.ipFamilies and spec.ipFamilyPolicy fields control how IP addresses are assigned to the service. apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-combined namespace: metallb-system spec: addresses: - 10.0.100.0/28 - 2002:2:2::1-2002:2:2::100 32.4.4.4. Example: Assign IP address pools to services or namespaces You can assign IP addresses from an IPAddressPool to services and namespaces that you specify. If you assign a service or namespace to more than one IP address pool, MetalLB uses an available IP address from the higher-priority IP address pool. If no IP addresses are available from the assigned IP address pools with a high priority, MetalLB uses available IP addresses from an IP address pool with lower priority or no priority. Note You can use the matchLabels label selector, the matchExpressions label selector, or both, for the namespaceSelectors and serviceSelectors specifications. This example demonstrates one label selector for each specification. apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-service-allocation namespace: metallb-system spec: addresses: - 192.168.20.0/24 serviceAllocation: priority: 50 1 namespaces: 2 - namespace-a - namespace-b namespaceSelectors: 3 - matchLabels: zone: east serviceSelectors: 4 - matchExpressions: - key: security operator: In values: - S1 1 Assign a priority to the address pool. A lower number indicates a higher priority. 2 Assign one or more namespaces to the IP address pool in a list format. 3 Assign one or more namespace labels to the IP address pool by using label selectors in a list format. 4 Assign one or more service labels to the IP address pool by using label selectors in a list format. 32.4.5. steps Configuring MetalLB with an L2 advertisement and label Configuring MetalLB BGP peers Configuring services to use MetalLB 32.5. About advertising for the IP address pools You can configure MetalLB so that the IP address is advertised with layer 2 protocols, the BGP protocol, or both. With layer 2, MetalLB provides a fault-tolerant external IP address. 
With BGP, MetalLB provides fault-tolerance for the external IP address and load balancing. MetalLB supports advertising using L2 and BGP for the same set of IP addresses. MetalLB provides the flexibility to assign address pools to specific BGP peers effectively to a subset of nodes on the network. This allows for more complex configurations, for example facilitating the isolation of nodes or the segmentation of the network. 32.5.1. About the BGPAdvertisement custom resource The fields for the BGPAdvertisements object are defined in the following table: Table 32.3. BGPAdvertisements configuration Field Type Description metadata.name string Specifies the name for the BGP advertisement. metadata.namespace string Specifies the namespace for the BGP advertisement. Specify the same namespace that the MetalLB Operator uses. spec.aggregationLength integer Optional: Specifies the number of bits to include in a 32-bit CIDR mask. To aggregate the routes that the speaker advertises to BGP peers, the mask is applied to the routes for several service IP addresses and the speaker advertises the aggregated route. For example, with an aggregation length of 24 , the speaker can aggregate several 10.0.1.x/32 service IP addresses and advertise a single 10.0.1.0/24 route. spec.aggregationLengthV6 integer Optional: Specifies the number of bits to include in a 128-bit CIDR mask. For example, with an aggregation length of 124 , the speaker can aggregate several fc00:f853:0ccd:e799::x/128 service IP addresses and advertise a single fc00:f853:0ccd:e799::0/124 route. spec.communities string Optional: Specifies one or more BGP communities. Each community is specified as two 16-bit values separated by the colon character. Well-known communities must be specified as 16-bit values: NO_EXPORT : 65535:65281 NO_ADVERTISE : 65535:65282 NO_EXPORT_SUBCONFED : 65535:65283 Note You can also use community objects that are created along with the strings. spec.localPref integer Optional: Specifies the local preference for this advertisement. This BGP attribute applies to BGP sessions within the Autonomous System. spec.ipAddressPools string Optional: The list of IPAddressPools to advertise with this advertisement, selected by name. spec.ipAddressPoolSelectors string Optional: A selector for the IPAddressPools that gets advertised with this advertisement. This is for associating the IPAddressPool to the advertisement based on the label assigned to the IPAddressPool instead of the name itself. If no IPAddressPool is selected by this or by the list, the advertisement is applied to all the IPAddressPools . spec.nodeSelectors string Optional: NodeSelectors allows to limit the nodes to announce as hops for the load balancer IP. When empty, all the nodes are announced as hops. spec.peers string Optional: Use a list to specify the metadata.name values for each BGPPeer resource that receives advertisements for the MetalLB service IP address. The MetalLB service IP address is assigned from the IP address pool. By default, the MetalLB service IP address is advertised to all configured BGPPeer resources. Use this field to limit the advertisement to specific BGPpeer resources. 32.5.2. Configuring MetalLB with a BGP advertisement and a basic use case Configure MetalLB as follows so that the peer BGP routers receive one 203.0.113.200/32 route and one fc00:f853:ccd:e799::1/128 route for each load-balancer IP address that MetalLB assigns to a service. 
Because the localPref and communities fields are not specified, the routes are advertised with localPref set to zero and no BGP communities. 32.5.2.1. Example: Advertise a basic address pool configuration with BGP Configure MetalLB as follows so that the IPAddressPool is advertised with the BGP protocol. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-basic spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124 Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create a BGP advertisement. Create a file, such as bgpadvertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-basic namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-basic Apply the configuration: USD oc apply -f bgpadvertisement.yaml 32.5.3. Configuring MetalLB with a BGP advertisement and an advanced use case Configure MetalLB as follows so that MetalLB assigns IP addresses to load-balancer services in the ranges between 203.0.113.200 and 203.0.113.203 and between fc00:f853:ccd:e799::0 and fc00:f853:ccd:e799::f . To explain the two BGP advertisements, consider an instance when MetalLB assigns the IP address of 203.0.113.200 to a service. With that IP address as an example, the speaker advertises two routes to BGP peers: 203.0.113.200/32 , with localPref set to 100 and the community set to the numeric value of the NO_ADVERTISE community. This specification indicates to the peer routers that they can use this route but they should not propagate information about this route to BGP peers. 203.0.113.200/30 , aggregates the load-balancer IP addresses assigned by MetalLB into a single route. MetalLB advertises the aggregated route to BGP peers with the community attribute set to 8000:800 . BGP peers propagate the 203.0.113.200/30 route to other BGP peers. When traffic is routed to a node with a speaker, the 203.0.113.200/32 route is used to forward the traffic into the cluster and to a pod that is associated with the service. As you add more services and MetalLB assigns more load-balancer IP addresses from the pool, peer routers receive one local route, 203.0.113.20x/32 , for each service, as well as the 203.0.113.200/30 aggregate route. Each service that you add generates the /30 route, but MetalLB deduplicates the routes to one BGP advertisement before communicating with peer routers. 32.5.3.1. Example: Advertise an advanced address pool configuration with BGP Configure MetalLB as follows so that the IPAddressPool is advertised with the BGP protocol. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-adv labels: zone: east spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124 autoAssign: false Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create a BGP advertisement. 
Create a file, such as bgpadvertisement1.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-adv-1 namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-adv communities: - 65535:65282 aggregationLength: 32 localPref: 100 Apply the configuration: USD oc apply -f bgpadvertisement1.yaml Create a file, such as bgpadvertisement2.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-adv-2 namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-adv communities: - 8000:800 aggregationLength: 30 aggregationLengthV6: 124 Apply the configuration: USD oc apply -f bgpadvertisement2.yaml 32.5.4. Advertising an IP address pool from a subset of nodes To advertise an IP address from an IP addresses pool, from a specific set of nodes only, use the .spec.nodeSelector specification in the BGPAdvertisement custom resource. This specification associates a pool of IP addresses with a set of nodes in the cluster. This is useful when you have nodes on different subnets in a cluster and you want to advertise an IP addresses from an address pool from a specific subnet, for example a public-facing subnet only. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool by using a custom resource: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool1 spec: addresses: - 4.4.4.100-4.4.4.200 - 2001:100:4::200-2001:100:4::400 Control which nodes in the cluster the IP address from pool1 advertises from by defining the .spec.nodeSelector value in the BGPAdvertisement custom resource: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: example spec: ipAddressPools: - pool1 nodeSelector: - matchLabels: kubernetes.io/hostname: NodeA - matchLabels: kubernetes.io/hostname: NodeB In this example, the IP address from pool1 advertises from NodeA and NodeB only. 32.5.5. About the L2Advertisement custom resource The fields for the l2Advertisements object are defined in the following table: Table 32.4. L2 advertisements configuration Field Type Description metadata.name string Specifies the name for the L2 advertisement. metadata.namespace string Specifies the namespace for the L2 advertisement. Specify the same namespace that the MetalLB Operator uses. spec.ipAddressPools string Optional: The list of IPAddressPools to advertise with this advertisement, selected by name. spec.ipAddressPoolSelectors string Optional: A selector for the IPAddressPools that gets advertised with this advertisement. This is for associating the IPAddressPool to the advertisement based on the label assigned to the IPAddressPool instead of the name itself. If no IPAddressPool is selected by this or by the list, the advertisement is applied to all the IPAddressPools . spec.nodeSelectors string Optional: NodeSelectors limits the nodes to announce as hops for the load balancer IP. When empty, all the nodes are announced as hops. Important Limiting the nodes to announce as hops is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . spec.interfaces string Optional: The list of interfaces that are used to announce the load balancer IP. 32.5.6. Configuring MetalLB with an L2 advertisement Configure MetalLB as follows so that the IPAddressPool is advertised with the L2 protocol. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2 spec: addresses: - 4.4.4.0/24 autoAssign: false Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create a L2 advertisement. Create a file, such as l2advertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - doc-example-l2 Apply the configuration: USD oc apply -f l2advertisement.yaml 32.5.7. Configuring MetalLB with a L2 advertisement and label The ipAddressPoolSelectors field in the BGPAdvertisement and L2Advertisement custom resource definitions is used to associate the IPAddressPool to the advertisement based on the label assigned to the IPAddressPool instead of the name itself. This example shows how to configure MetalLB so that the IPAddressPool is advertised with the L2 protocol by configuring the ipAddressPoolSelectors field. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2-label labels: zone: east spec: addresses: - 172.31.249.87/32 Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create a L2 advertisement advertising the IP using ipAddressPoolSelectors . Create a file, such as l2advertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement-label namespace: metallb-system spec: ipAddressPoolSelectors: - matchExpressions: - key: zone operator: In values: - east Apply the configuration: USD oc apply -f l2advertisement.yaml 32.5.8. Configuring MetalLB with an L2 advertisement for selected interfaces By default, the IP addresses from IP address pool that has been assigned to the service, is advertised from all the network interfaces. The interfaces field in the L2Advertisement custom resource definition is used to restrict those network interfaces that advertise the IP address pool. This example shows how to configure MetalLB so that the IP address pool is advertised only from the network interfaces listed in the interfaces field of all nodes. Prerequisites You have installed the OpenShift CLI ( oc ). You are logged in as a user with cluster-admin privileges. Procedure Create an IP address pool. 
Create a file, such as ipaddresspool.yaml , and enter the configuration details like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2 spec: addresses: - 4.4.4.0/24 autoAssign: false Apply the configuration for the IP address pool like the following example: USD oc apply -f ipaddresspool.yaml Create an L2 advertisement that advertises the IP address by using the interfaces selector. Create a YAML file, such as l2advertisement.yaml , and enter the configuration details like the following example: apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - doc-example-l2 interfaces: - interfaceA - interfaceB Apply the configuration for the advertisement like the following example: USD oc apply -f l2advertisement.yaml Important The interface selector does not affect how MetalLB chooses the node to announce a given IP by using L2. The chosen node does not announce the service if the node does not have the selected interface. 32.5.9. Configuring MetalLB with secondary networks From OpenShift Container Platform 4.14, the default network behavior is to not allow forwarding of IP packets between network interfaces. Therefore, when MetalLB is configured on a secondary interface, you need to add a machine configuration to enable IP forwarding for only the required interfaces. Note OpenShift Container Platform clusters upgraded from 4.13 are not affected because a global parameter is set during upgrade to enable global IP forwarding. To enable IP forwarding for the secondary interface, you have two options: Enable IP forwarding for all interfaces. Enable IP forwarding for a specific interface. Note Enabling IP forwarding for a specific interface provides more granular control, while enabling it for all interfaces applies a global setting. Procedure Enable forwarding for a specific secondary interface, such as bridge-net , by creating and applying a MachineConfig CR. Create the MachineConfig CR to enable IP forwarding for the specified secondary interface named bridge-net . Save the following YAML in the enable-ip-forward.yaml file: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: <node_role> 1 name: 81-enable-global-forwarding spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,`echo -e "net.ipv4.conf.bridge-net.forwarding = 1\nnet.ipv6.conf.bridge-net.forwarding = 1\nnet.ipv4.conf.bridge-net.rp_filter = 0\nnet.ipv6.conf.bridge-net.rp_filter = 0" | base64 -w0` verification: {} filesystem: root mode: 644 path: /etc/sysctl.d/enable-global-forwarding.conf osImageURL: "" 1 Node role where you want to enable IP forwarding, for example, worker Apply the configuration by running the following command: USD oc apply -f enable-ip-forward.yaml Alternatively, you can enable IP forwarding globally by running the following command: USD oc patch network.operator cluster -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding": "Global"}}}}}' 32.5.10. Additional resources Configuring a community alias . 32.6. Configuring MetalLB BGP peers As a cluster administrator, you can add, modify, and delete Border Gateway Protocol (BGP) peers. The MetalLB Operator uses the BGP peer custom resources to identify which peers MetalLB speaker pods contact to start BGP sessions.
The peers receive the route advertisements for the load-balancer IP addresses that MetalLB assigns to services. 32.6.1. About the BGP peer custom resource The fields for the BGP peer custom resource are described in the following table. Table 32.5. MetalLB BGP peer custom resource Field Type Description metadata.name string Specifies the name for the BGP peer custom resource. metadata.namespace string Specifies the namespace for the BGP peer custom resource. spec.myASN integer Specifies the Autonomous System number for the local end of the BGP session. Specify the same value in all BGP peer custom resources that you add. The range is 0 to 4294967295 . spec.peerASN integer Specifies the Autonomous System number for the remote end of the BGP session. The range is 0 to 4294967295 . spec.peerAddress string Specifies the IP address of the peer to contact for establishing the BGP session. spec.sourceAddress string Optional: Specifies the IP address to use when establishing the BGP session. The value must be an IPv4 address. spec.peerPort integer Optional: Specifies the network port of the peer to contact for establishing the BGP session. The range is 0 to 16384 . spec.holdTime string Optional: Specifies the duration for the hold time to propose to the BGP peer. The minimum value is 3 seconds ( 3s ). The common units are seconds and minutes, such as 3s , 1m , and 5m30s . To detect path failures more quickly, also configure BFD. spec.keepaliveTime string Optional: Specifies the maximum interval between sending keep-alive messages to the BGP peer. If you specify this field, you must also specify a value for the holdTime field. The specified value must be less than the value for the holdTime field. spec.routerID string Optional: Specifies the router ID to advertise to the BGP peer. If you specify this field, you must specify the same value in every BGP peer custom resource that you add. spec.password string Optional: Specifies the MD5 password to send to the peer for routers that enforce TCP MD5 authenticated BGP sessions. spec.passwordSecret string Optional: Specifies name of the authentication secret for the BGP Peer. The secret must live in the metallb namespace and be of type basic-auth. spec.bfdProfile string Optional: Specifies the name of a BFD profile. spec.nodeSelectors object[] Optional: Specifies a selector, using match expressions and match labels, to control which nodes can connect to the BGP peer. spec.ebgpMultiHop boolean Optional: Specifies that the BGP peer is multiple network hops away. If the BGP peer is not directly connected to the same network, the speaker cannot establish a BGP session unless this field is set to true . This field applies to external BGP . External BGP is the term that is used to describe when a BGP peer belongs to a different Autonomous System. Note The passwordSecret field is mutually exclusive with the password field, and contains a reference to a secret containing the password to use. Setting both fields results in a failure of the parsing. 32.6.2. Configuring a BGP peer As a cluster administrator, you can add a BGP peer custom resource to exchange routing information with network routers and advertise the IP addresses for services. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Configure MetalLB with a BGP advertisement. 
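Note The following BGPPeer resource is a minimal sketch, separate from the procedure that follows, that shows how several of the optional fields described in the preceding table can be combined. The addresses, ASNs, timer values, and resource name are illustrative assumptions only; further examples of individual fields appear in "Example BGP peer configurations".
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: doc-example-peer-tuned        # hypothetical name for illustration
  namespace: metallb-system
spec:
  myASN: 64500
  peerASN: 64501
  peerAddress: 10.0.0.1
  holdTime: "9s"                      # hold time proposed to the peer; minimum is 3s
  keepaliveTime: "3s"                 # must be less than the holdTime value
  ebgpMultiHop: true                  # set to true only if the peer is not directly connected
  bfdProfile: doc-example-bfd-profile-full   # assumes the BFD profile defined later in this document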
Procedure Create a file, such as bgppeer.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: doc-example-peer spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10 Apply the configuration for the BGP peer: USD oc apply -f bgppeer.yaml 32.6.3. Configure a specific set of BGP peers for a given address pool This procedure illustrates how to: Configure a set of address pools ( pool1 and pool2 ). Configure a set of BGP peers ( peer1 and peer2 ). Configure BGP advertisement to assign pool1 to peer1 and pool2 to peer2 . Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create address pool pool1 . Create a file, such as ipaddresspool1.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool1 spec: addresses: - 4.4.4.100-4.4.4.200 - 2001:100:4::200-2001:100:4::400 Apply the configuration for the IP address pool pool1 : USD oc apply -f ipaddresspool1.yaml Create address pool pool2 . Create a file, such as ipaddresspool2.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool2 spec: addresses: - 5.5.5.100-5.5.5.200 - 2001:100:5::200-2001:100:5::400 Apply the configuration for the IP address pool pool2 : USD oc apply -f ipaddresspool2.yaml Create BGP peer1 . Create a file, such as bgppeer1.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: peer1 spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10 Apply the configuration for the BGP peer: USD oc apply -f bgppeer1.yaml Create BGP peer2 . Create a file, such as bgppeer2.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: peer2 spec: peerAddress: 10.0.0.2 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10 Apply the configuration for the BGP peer2: USD oc apply -f bgppeer2.yaml Create BGP advertisement 1. Create a file, such as bgpadvertisement1.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-1 namespace: metallb-system spec: ipAddressPools: - pool1 peers: - peer1 communities: - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100 Apply the configuration: USD oc apply -f bgpadvertisement1.yaml Create BGP advertisement 2. Create a file, such as bgpadvertisement2.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-2 namespace: metallb-system spec: ipAddressPools: - pool2 peers: - peer2 communities: - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100 Apply the configuration: USD oc apply -f bgpadvertisement2.yaml 32.6.4. Exposing a service through a network VRF You can expose a service through a virtual routing and forwarding (VRF) instance by associating a VRF on a network interface with a BGP peer. Important Exposing a service through a VRF on a BGP peer is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . By using a VRF on a network interface to expose a service through a BGP peer, you can segregate traffic to the service, configure independent routing decisions, and enable multi-tenancy support on a network interface. Note By establishing a BGP session through an interface belonging to a network VRF, MetalLB can advertise services through that interface and enable external traffic to reach the service through this interface. However, the network VRF routing table is different from the default VRF routing table used by OVN-Kubernetes. Therefore, the traffic cannot reach the OVN-Kubernetes network infrastructure. To enable the traffic directed to the service to reach the OVN-Kubernetes network infrastructure, you must configure routing rules to define the hops for network traffic. See the NodeNetworkConfigurationPolicy resource in "Managing symmetric routing with MetalLB" in the Additional resources section for more information. These are the high-level steps to expose a service through a network VRF with a BGP peer: Define a BGP peer and add a network VRF instance. Specify an IP address pool for MetalLB. Configure a BGP route advertisement for MetalLB to advertise a route using the specified IP address pool and the BGP peer associated with the VRF instance. Deploy a service to test the configuration. Prerequisites You installed the OpenShift CLI ( oc ). You logged in as a user with cluster-admin privileges. You defined a NodeNetworkConfigurationPolicy to associate a Virtual Routing and Forwarding (VRF) instance with a network interface. For more information about completing this prerequisite, see the Additional resources section. You installed MetalLB on your cluster. Procedure Create a BGPPeer custom resources (CR): Create a file, such as frrviavrf.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: frrviavrf namespace: metallb-system spec: myASN: 100 peerASN: 200 peerAddress: 192.168.130.1 vrf: ens4vrf 1 1 Specifies the network VRF instance to associate with the BGP peer. MetalLB can advertise services and make routing decisions based on the routing information in the VRF. Note You must configure this network VRF instance in a NodeNetworkConfigurationPolicy CR. See the Additional resources for more information. Apply the configuration for the BGP peer by running the following command: USD oc apply -f frrviavrf.yaml Create an IPAddressPool CR: Create a file, such as first-pool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: first-pool namespace: metallb-system spec: addresses: - 192.169.10.0/32 Apply the configuration for the IP address pool by running the following command: USD oc apply -f first-pool.yaml Create a BGPAdvertisement CR: Create a file, such as first-adv.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: first-adv namespace: metallb-system spec: ipAddressPools: - first-pool peers: - frrviavrf 1 1 In this example, MetalLB advertises a range of IP addresses from the first-pool IP address pool to the frrviavrf BGP peer. 
Apply the configuration for the BGP advertisement by running the following command: USD oc apply -f first-adv.yaml Create a Namespace , Deployment , and Service CR: Create a file, such as deploy-service.yaml , with content like the following example: apiVersion: v1 kind: Namespace metadata: name: test --- apiVersion: apps/v1 kind: Deployment metadata: name: server namespace: test spec: selector: matchLabels: app: server template: metadata: labels: app: server spec: containers: - name: server image: registry.redhat.io/ubi9/ubi ports: - name: http containerPort: 30100 command: ["/bin/sh", "-c"] args: ["sleep INF"] --- apiVersion: v1 kind: Service metadata: name: server1 namespace: test spec: ports: - name: http port: 30100 protocol: TCP targetPort: 30100 selector: app: server type: LoadBalancer Apply the configuration for the namespace, deployment, and service by running the following command: USD oc apply -f deploy-service.yaml Verification Identify a MetalLB speaker pod by running the following command: USD oc get -n metallb-system pods -l component=speaker Example output NAME READY STATUS RESTARTS AGE speaker-c6c5f 6/6 Running 0 69m Verify that the state of the BGP session is Established in the speaker pod by running the following command, replacing the variables to match your configuration: USD oc exec -n metallb-system <speaker_pod> -c frr -- vtysh -c "show bgp vrf <vrf_name> neigh" Example output BGP neighbor is 192.168.30.1, remote AS 200, local AS 100, external link BGP version 4, remote router ID 192.168.30.1, local router ID 192.168.30.71 BGP state = Established, up for 04:20:09 ... Verify that the service is advertised correctly by running the following command: USD oc exec -n metallb-system <speaker_pod> -c frr -- vtysh -c "show bgp vrf <vrf_name> ipv4" Additional resources About virtual routing and forwarding Example: Network interface with a VRF instance node network configuration policy Configuring an egress service Managing symmetric routing with MetalLB 32.6.5. Example BGP peer configurations 32.6.5.1. Example: Limit which nodes connect to a BGP peer You can specify the node selectors field to control which nodes can connect to a BGP peer. apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-nodesel namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64501 myASN: 64500 nodeSelectors: - matchExpressions: - key: kubernetes.io/hostname operator: In values: [compute-1.example.com, compute-2.example.com] 32.6.5.2. Example: Specify a BFD profile for a BGP peer You can specify a BFD profile to associate with BGP peers. BFD compliments BGP by providing more rapid detection of communication failures between peers than BGP alone. apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-peer-bfd namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64501 myASN: 64500 holdTime: "10s" bfdProfile: doc-example-bfd-profile-full Note Deleting the bidirectional forwarding detection (BFD) profile and removing the bfdProfile added to the border gateway protocol (BGP) peer resource does not disable the BFD. Instead, the BGP peer starts using the default BFD profile. To disable BFD from a BGP peer resource, delete the BGP peer configuration and recreate it without a BFD profile. For more information, see BZ#2050824 . 32.6.5.3. Example: Specify BGP peers for dual-stack networking To support dual-stack networking, add one BGP peer custom resource for IPv4 and one BGP peer custom resource for IPv6. 
apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-dual-stack-ipv4 namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64500 myASN: 64500 --- apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-dual-stack-ipv6 namespace: metallb-system spec: peerAddress: 2620:52:0:88::104 peerASN: 64500 myASN: 64500 32.6.6. steps Configuring services to use MetalLB 32.7. Configuring community alias As a cluster administrator, you can configure a community alias and use it across different advertisements. 32.7.1. About the community custom resource The community custom resource is a collection of aliases for communities. Users can define named aliases to be used when advertising ipAddressPools using the BGPAdvertisement . The fields for the community custom resource are described in the following table. Note The community CRD applies only to BGPAdvertisement. Table 32.6. MetalLB community custom resource Field Type Description metadata.name string Specifies the name for the community . metadata.namespace string Specifies the namespace for the community . Specify the same namespace that the MetalLB Operator uses. spec.communities string Specifies a list of BGP community aliases that can be used in BGPAdvertisements. A community alias consists of a pair of name (alias) and value (number:number). Link the BGPAdvertisement to a community alias by referring to the alias name in its spec.communities field. Table 32.7. CommunityAlias Field Type Description name string The name of the alias for the community . value string The BGP community value corresponding to the given name. 32.7.2. Configuring MetalLB with a BGP advertisement and community alias Configure MetalLB as follows so that the IPAddressPool is advertised with the BGP protocol and the community alias set to the numeric value of the NO_ADVERTISE community. In the following example, the peer BGP router doc-example-peer-community receives one 203.0.113.200/32 route and one fc00:f853:ccd:e799::1/128 route for each load-balancer IP address that MetalLB assigns to a service. A community alias is configured with the NO_ADVERTISE community. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-community spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124 Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create a community alias named community1 . apiVersion: metallb.io/v1beta1 kind: Community metadata: name: community1 namespace: metallb-system spec: communities: - name: NO_ADVERTISE value: '65535:65282' Create a BGP peer named doc-example-bgp-peer . Create a file, such as bgppeer.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: doc-example-bgp-peer spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10 Apply the configuration for the BGP peer: USD oc apply -f bgppeer.yaml Create a BGP advertisement with the community alias. 
Create a file, such as bgpadvertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgp-community-sample namespace: metallb-system spec: aggregationLength: 32 aggregationLengthV6: 128 communities: - NO_ADVERTISE 1 ipAddressPools: - doc-example-bgp-community peers: - doc-example-bgp-peer 1 Specify the CommunityAlias.name here and not the community custom resource (CR) name. Apply the configuration: USD oc apply -f bgpadvertisement.yaml 32.8. Configuring MetalLB BFD profiles As a cluster administrator, you can add, modify, and delete Bidirectional Forwarding Detection (BFD) profiles. The MetalLB Operator uses the BFD profile custom resources to identify which BGP sessions use BFD to provide faster path failure detection than BGP alone provides. 32.8.1. About the BFD profile custom resource The fields for the BFD profile custom resource are described in the following table. Table 32.8. BFD profile custom resource Field Type Description metadata.name string Specifies the name for the BFD profile custom resource. metadata.namespace string Specifies the namespace for the BFD profile custom resource. spec.detectMultiplier integer Specifies the detection multiplier to determine packet loss. The remote transmission interval is multiplied by this value to determine the connection loss detection timer. For example, when the local system has the detect multiplier set to 3 and the remote system has the transmission interval set to 300 , the local system detects failures only after 900 ms without receiving packets. The range is 2 to 255 . The default value is 3 . spec.echoMode boolean Specifies the echo transmission mode. If you are not using distributed BFD, echo transmission mode works only when the peer is also FRR. The default value is false and echo transmission mode is disabled. When echo transmission mode is enabled, consider increasing the transmission interval of control packets to reduce bandwidth usage. For example, consider increasing the transmit interval to 2000 ms. spec.echoInterval integer Specifies the minimum transmission interval, less jitter, that this system uses to send and receive echo packets. The range is 10 to 60000 . The default value is 50 ms. spec.minimumTtl integer Specifies the minimum expected TTL for an incoming control packet. This field applies to multi-hop sessions only. The purpose of setting a minimum TTL is to make the packet validation requirements more stringent and avoid receiving control packets from other sessions. The default value is 254 and indicates that the system expects only one hop between this system and the peer. spec.passiveMode boolean Specifies whether a session is marked as active or passive. A passive session does not attempt to start the connection. Instead, a passive session waits for control packets from a peer before it begins to reply. Marking a session as passive is useful when you have a router that acts as the central node of a star network and you want to avoid sending control packets that you do not need the system to send. The default value is false and marks the session as active. spec.receiveInterval integer Specifies the minimum interval that this system is capable of receiving control packets. The range is 10 to 60000 . The default value is 300 ms. spec.transmitInterval integer Specifies the minimum transmission interval, less jitter, that this system uses to send control packets. The range is 10 to 60000 . The default value is 300 ms. 32.8.2. 
Configuring a BFD profile As a cluster administrator, you can add a BFD profile and configure a BGP peer to use the profile. BFD provides faster path failure detection than BGP alone. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a file, such as bfdprofile.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BFDProfile metadata: name: doc-example-bfd-profile-full namespace: metallb-system spec: receiveInterval: 300 transmitInterval: 300 detectMultiplier: 3 echoMode: false passiveMode: true minimumTtl: 254 Apply the configuration for the BFD profile: USD oc apply -f bfdprofile.yaml 32.8.3. Next steps Configure a BGP peer to use the BFD profile. 32.9. Configuring services to use MetalLB As a cluster administrator, when you add a service of type LoadBalancer , you can control how MetalLB assigns an IP address. 32.9.1. Request a specific IP address Like some other load-balancer implementations, MetalLB accepts the spec.loadBalancerIP field in the service specification. If the requested IP address is within a range from any address pool, MetalLB assigns the requested IP address. If the requested IP address is not within any range, MetalLB reports a warning. Example service YAML for a specific IP address apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.universe.tf/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer loadBalancerIP: <ip_address> If MetalLB cannot assign the requested IP address, the EXTERNAL-IP for the service reports <pending> and running oc describe service <service_name> includes an event like the following example. Example event when MetalLB cannot assign a requested IP address ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning AllocationFailed 3m16s metallb-controller Failed to allocate IP for "default/invalid-request": "4.3.2.1" is not allowed in config 32.9.2. Request an IP address from a specific pool To assign an IP address from a specific range when you are not concerned with the exact IP address, you can use the metallb.universe.tf/address-pool annotation to request an IP address from the specified address pool. Example service YAML for an IP address from a specific pool apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.universe.tf/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer If the address pool that you specify for <address_pool_name> does not exist, MetalLB attempts to assign an IP address from any pool that permits automatic assignment. 32.9.3. Accept any IP address By default, address pools are configured to permit automatic assignment. MetalLB assigns an IP address from these address pools. To accept any IP address from any pool that is configured for automatic assignment, no special annotation or configuration is required. Example service YAML for accepting any IP address apiVersion: v1 kind: Service metadata: name: <service_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer 32.9.4. Share a specific IP address By default, services do not share IP addresses. 
However, if you need to colocate services on a single IP address, you can enable selective IP sharing by adding the metallb.universe.tf/allow-shared-ip annotation to the services. apiVersion: v1 kind: Service metadata: name: service-http annotations: metallb.universe.tf/address-pool: doc-example metallb.universe.tf/allow-shared-ip: "web-server-svc" 1 spec: ports: - name: http port: 80 2 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> 3 type: LoadBalancer loadBalancerIP: 172.31.249.7 4 --- apiVersion: v1 kind: Service metadata: name: service-https annotations: metallb.universe.tf/address-pool: doc-example metallb.universe.tf/allow-shared-ip: "web-server-svc" spec: ports: - name: https port: 443 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> type: LoadBalancer loadBalancerIP: 172.31.249.7 1 Specify the same value for the metallb.universe.tf/allow-shared-ip annotation. This value is referred to as the sharing key . 2 Specify different port numbers for the services. 3 Specify identical pod selectors if you must specify externalTrafficPolicy: local so the services send traffic to the same set of pods. If you use the cluster external traffic policy, then the pod selectors do not need to be identical. 4 Optional: If you specify the three preceding items, MetalLB might colocate the services on the same IP address. To ensure that services share an IP address, specify the IP address to share. By default, Kubernetes does not allow multiprotocol load balancer services. This limitation would normally make it impossible to run a service like DNS that needs to listen on both TCP and UDP. To work around this limitation of Kubernetes with MetalLB, create two services: For one service, specify TCP and for the second service, specify UDP. In both services, specify the same pod selector. Specify the same sharing key and spec.loadBalancerIP value to colocate the TCP and UDP services on the same IP address. 32.9.5. Configuring a service with MetalLB You can configure a load-balancing service to use an external IP address from an address pool. Prerequisites Install the OpenShift CLI ( oc ). Install the MetalLB Operator and start MetalLB. Configure at least one address pool. Configure your network to route traffic from the clients to the host network for the cluster. Procedure Create a <service_name>.yaml file. In the file, ensure that the spec.type field is set to LoadBalancer . Refer to the examples for information about how to request the external IP address that MetalLB assigns to the service. Create the service: USD oc apply -f <service_name>.yaml Example output service/<service_name> created Verification Describe the service: USD oc describe service <service_name> Example output 1 The annotation is present if you request an IP address from a specific pool. 2 The service type must indicate LoadBalancer . 3 The load-balancer ingress field indicates the external IP address if the service is assigned correctly. 4 The events field indicates the node name that is assigned to announce the external IP address. If you experience an error, the events field indicates the reason for the error. 32.10. Managing symmetric routing with MetalLB As a cluster administrator, you can effectively manage traffic for pods behind a MetalLB load-balancer service with multiple host interfaces by implementing features from MetalLB, NMState, and OVN-Kubernetes. 
By combining these features in this context, you can provide symmetric routing, traffic segregation, and support clients on different networks with overlapping CIDR addresses. To achieve this functionality, learn how to implement virtual routing and forwarding (VRF) instances with MetalLB, and configure egress services. Important Configuring symmetric traffic by using a VRF instance with MetalLB and an egress service is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 32.10.1. Challenges of managing symmetric routing with MetalLB When you use MetalLB with multiple host interfaces, MetalLB exposes and announces a service through all available interfaces on the host. This can present challenges relating to network isolation, asymmetric return traffic and overlapping CIDR addresses. One option to ensure that return traffic reaches the correct client is to use static routes. However, with this solution, MetalLB cannot isolate the services and then announce each service through a different interface. Additionally, static routing requires manual configuration and requires maintenance if remote sites are added. A further challenge of symmetric routing when implementing a MetalLB service is scenarios where external systems expect the source and destination IP address for an application to be the same. The default behavior for OpenShift Container Platform is to assign the IP address of the host network interface as the source IP address for traffic originating from pods. This is problematic with multiple host interfaces. You can overcome these challenges by implementing a configuration that combines features from MetalLB, NMState, and OVN-Kubernetes. 32.10.2. Overview of managing symmetric routing by using VRFs with MetalLB You can overcome the challenges of implementing symmetric routing by using NMState to configure a VRF instance on a host, associating the VRF instance with a MetalLB BGPPeer resource, and configuring an egress service for egress traffic with OVN-Kubernetes. Figure 32.2. Network overview of managing symmetric routing by using VRFs with MetalLB The configuration process involves three stages: 1. Define a VRF and routing rules Configure a NodeNetworkConfigurationPolicy custom resource (CR) to associate a VRF instance with a network interface. Use the VRF routing table to direct ingress and egress traffic. 2. Link the VRF to a MetalLB BGPPeer Configure a MetalLB BGPPeer resource to use the VRF instance on a network interface. By associating the BGPPeer resource with the VRF instance, the designated network interface becomes the primary interface for the BGP session, and MetalLB advertises the services through this interface. 3. Configure an egress service Configure an egress service to choose the network associated with the VRF instance for egress traffic. Optional: Configure an egress service to use the IP address of the MetalLB load-balancer service as the source IP for egress traffic. 32.10.3. 
Configuring symmetric routing by using VRFs with MetalLB You can configure symmetric network routing for applications behind a MetalLB service that require the same ingress and egress network paths. This example associates a VRF routing table with MetalLB and an egress service to enable symmetric routing for ingress and egress traffic for pods behind a LoadBalancer service. Note If you use the sourceIPBy: "LoadBalancerIP" setting in the EgressService CR, you must specify the load-balancer node in the BGPAdvertisement custom resource (CR). You can use the sourceIPBy: "Network" setting on clusters that use OVN-Kubernetes configured with the gatewayConfig.routingViaHost specification set to true only. Additionally, if you use the sourceIPBy: "Network" setting, you must schedule the application workload on nodes configured with the network VRF instance. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the Kubernetes NMState Operator. Install the MetalLB Operator. Procedure Create a NodeNetworkConfigurationPolicy CR to define the VRF instance: Create a file, such as node-network-vrf.yaml , with content like the following example: apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: vrfpolicy 1 spec: nodeSelector: vrf: "true" 2 maxUnavailable: 3 desiredState: interfaces: - name: ens4vrf 3 type: vrf 4 state: up vrf: port: - ens4 5 route-table-id: 2 6 - name: ens4 7 type: ethernet state: up ipv4: address: - ip: 192.168.130.130 prefix-length: 24 dhcp: false enabled: true routes: 8 config: - destination: 0.0.0.0/0 metric: 150 next-hop-address: 192.168.130.1 next-hop-interface: ens4 table-id: 2 route-rules: 9 config: - ip-to: 172.30.0.0/16 priority: 998 route-table: 254 10 - ip-to: 10.132.0.0/14 priority: 998 route-table: 254 1 The name of the policy. 2 This example applies the policy to all nodes with the label vrf:true . 3 The name of the interface. 4 The type of interface. This example creates a VRF instance. 5 The node interface that the VRF attaches to. 6 The name of the route table ID for the VRF. 7 The IPv4 address of the interface associated with the VRF. 8 Defines the configuration for network routes. The next-hop-address field defines the IP address of the next hop for the route. The next-hop-interface field defines the outgoing interface for the route. In this example, the VRF routing table is 2 , which references the ID that you define in the EgressService CR. 9 Defines additional route rules. The ip-to fields must match the Cluster Network CIDR and Service Network CIDR. You can view the values for these CIDR address specifications by running the following command: oc describe network.config/cluster . 10 The main routing table that the Linux kernel uses when calculating routes has the ID 254 . Apply the policy by running the following command: USD oc apply -f node-network-vrf.yaml Create a BGPPeer custom resource (CR): Create a file, such as frr-via-vrf.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: frrviavrf namespace: metallb-system spec: myASN: 100 peerASN: 200 peerAddress: 192.168.130.1 vrf: ens4vrf 1 1 Specifies the VRF instance to associate with the BGP peer. MetalLB can advertise services and make routing decisions based on the routing information in the VRF. 
Apply the configuration for the BGP peer by running the following command: USD oc apply -f frr-via-vrf.yaml Create an IPAddressPool CR: Create a file, such as first-pool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: first-pool namespace: metallb-system spec: addresses: - 192.169.10.0/32 Apply the configuration for the IP address pool by running the following command: USD oc apply -f first-pool.yaml Create a BGPAdvertisement CR: Create a file, such as first-adv.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: first-adv namespace: metallb-system spec: ipAddressPools: - first-pool peers: - frrviavrf 1 nodeSelectors: - matchLabels: egress-service.k8s.ovn.org/test-server1: "" 2 1 In this example, MetalLB advertises a range of IP addresses from the first-pool IP address pool to the frrviavrf BGP peer. 2 In this example, the EgressService CR configures the source IP address for egress traffic to use the load-balancer service IP address. Therefore, you must specify the load-balancer node for return traffic to use the same return path for the traffic originating from the pod. Apply the configuration for the BGP advertisement by running the following command: USD oc apply -f first-adv.yaml Create an EgressService CR: Create a file, such as egress-service.yaml , with content like the following example: apiVersion: k8s.ovn.org/v1 kind: EgressService metadata: name: server1 1 namespace: test 2 spec: sourceIPBy: "LoadBalancerIP" 3 nodeSelector: matchLabels: vrf: "true" 4 network: "2" 5 1 Specify the name for the egress service. The name of the EgressService resource must match the name of the load-balancer service that you want to modify. 2 Specify the namespace for the egress service. The namespace for the EgressService must match the namespace of the load-balancer service that you want to modify. The egress service is namespace-scoped. 3 This example assigns the LoadBalancer service ingress IP address as the source IP address for egress traffic. 4 If you specify LoadBalancerIP for the sourceIPBy specification, a single node handles the LoadBalancer service traffic. In this example, only a node with the label vrf: "true" can handle the service traffic. If you do not specify a node, OVN-Kubernetes selects a worker node to handle the service traffic. When a node is selected, OVN-Kubernetes labels the node in the following format: egress-service.k8s.ovn.org/<svc_namespace>-<svc_name>: "" . 5 Specify the routing table for egress traffic. Apply the configuration for the egress service by running the following command: USD oc apply -f egress-service.yaml Verification Verify that you can access the application endpoint of the pods running behind the MetalLB service by running the following command: USD curl <external_ip_address>:<port_number> 1 1 Update the external IP address and port number to suit your application endpoint. Optional: If you assigned the LoadBalancer service ingress IP address as the source IP address for egress traffic, verify this configuration by using tools such as tcpdump to analyze packets received at the external client. A minimal capture sketch follows after the additional resources list below. Additional resources About virtual routing and forwarding Exposing a service through a network VRF Example: Network interface with a VRF instance node network configuration policy Configuring an egress service 
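The tcpdump check mentioned in the verification above can look like the following minimal sketch. The interface name, load-balancer IP address, and port number are placeholders, and the filter expression is an illustration rather than part of the documented procedure:
USD tcpdump -nn -i <client_interface> host <load_balancer_ip> and tcp port <port_number>
If the egress service is working as intended, packets that originate from the pod and arrive at the external client show the MetalLB load-balancer IP address as their source address rather than a node IP address. 32.11.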
MetalLB logging, troubleshooting, and support If you need to troubleshoot MetalLB configuration, see the following sections for commonly used commands. 32.11.1. Setting the MetalLB logging levels MetalLB uses FRRouting (FRR) in a container, and the default logging setting of info generates a lot of logging. You can control the verbosity of the logs generated by setting the logLevel as illustrated in this example. Gain a deeper insight into MetalLB by setting the logLevel to debug as follows: Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Create a file, such as setdebugloglevel.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: logLevel: debug nodeSelector: node-role.kubernetes.io/worker: "" Apply the configuration: USD oc replace -f setdebugloglevel.yaml Note Use oc replace because the metallb CR is already created and you are only changing the log level. Display the names of the speaker pods: USD oc get -n metallb-system pods -l component=speaker Example output NAME READY STATUS RESTARTS AGE speaker-2m9pm 4/4 Running 0 9m19s speaker-7m4qw 3/4 Running 0 19s speaker-szlmx 4/4 Running 0 9m19s Note Speaker and controller pods are recreated to ensure the updated logging level is applied. The logging level is modified for all the components of MetalLB. View the speaker logs: USD oc logs -n metallb-system speaker-7m4qw -c speaker Example output View the FRR logs: USD oc logs -n metallb-system speaker-7m4qw -c frr Example output 32.11.1.1. FRRouting (FRR) log levels The following table describes the FRR logging levels. Table 32.9. Log levels Log level Description all Supplies all logging information for all logging levels. debug Information that is diagnostically helpful to people. Set to debug to give detailed troubleshooting information. info Provides information that always should be logged but under normal circumstances does not require user intervention. This is the default logging level. warn Anything that can potentially cause inconsistent MetalLB behaviour. Usually MetalLB automatically recovers from this type of error. error Any error that is fatal to the functioning of MetalLB . These errors usually require administrator intervention to fix. none Turn off all logging. 32.11.2. Troubleshooting BGP issues The BGP implementation that Red Hat supports uses FRRouting (FRR) in a container in the speaker pods. As a cluster administrator, if you need to troubleshoot BGP configuration issues, you need to run commands in the FRR container. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Display the names of the speaker pods: USD oc get -n metallb-system pods -l component=speaker Example output NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 56m speaker-gvfnf 4/4 Running 0 56m ... Display the running configuration for FRR: USD oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show running-config" Example output 1 The router bgp section indicates the ASN for MetalLB. 2 Confirm that a neighbor <ip-address> remote-as <peer-ASN> line exists for each BGP peer custom resource that you added. 3 If you configured BFD, confirm that the BFD profile is associated with the correct BGP peer and that the BFD profile appears in the command output. 
4 Confirm that the network <ip-address-range> lines match the IP address ranges that you specified in address pool custom resources that you added. Display the BGP summary: USD oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show bgp summary" Example output 1 Confirm that the output includes a line for each BGP peer custom resource that you added. 2 Output that shows 0 messages received and messages sent indicates a BGP peer that does not have a BGP session. Check network connectivity and the BGP configuration of the BGP peer. Display the BGP peers that received an address pool: USD oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show bgp ipv4 unicast 203.0.113.200/30" Replace ipv4 with ipv6 to display the BGP peers that received an IPv6 address pool. Replace 203.0.113.200/30 with an IPv4 or IPv6 IP address range from an address pool. Example output 1 Confirm that the output includes an IP address for a BGP peer. 32.11.3. Troubleshooting BFD issues The Bidirectional Forwarding Detection (BFD) implementation that Red Hat supports uses FRRouting (FRR) in a container in the speaker pods. The BFD implementation relies on BFD peers also being configured as BGP peers with an established BGP session. As a cluster administrator, if you need to troubleshoot BFD configuration issues, you need to run commands in the FRR container. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Display the names of the speaker pods: USD oc get -n metallb-system pods -l component=speaker Example output NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 26m speaker-gvfnf 4/4 Running 0 26m ... Display the BFD peers: USD oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show bfd peers brief" Example output <.> Confirm that the PeerAddress column includes each BFD peer. If the output does not list a BFD peer IP address that you expected the output to include, troubleshoot BGP connectivity with the peer. If the status field indicates down , check for connectivity on the links and equipment between the node and the peer. You can determine the node name for the speaker pod with a command like oc get pods -n metallb-system speaker-66bth -o jsonpath='{.spec.nodeName}' . 32.11.4. MetalLB metrics for BGP and BFD OpenShift Container Platform captures the following Prometheus metrics for MetalLB that relate to BGP peers and BFD profiles. metallb_bfd_control_packet_input counts the number of BFD control packets received from each BFD peer. metallb_bfd_control_packet_output counts the number of BFD control packets sent to each BFD peer. metallb_bfd_echo_packet_input counts the number of BFD echo packets received from each BFD peer. metallb_bfd_echo_packet_output counts the number of BFD echo packets sent to each BFD peer. metallb_bfd_session_down_events counts the number of times the BFD session with a peer entered the down state. metallb_bfd_session_up indicates the connection state with a BFD peer. 1 indicates the session is up and 0 indicates the session is down . metallb_bfd_session_up_events counts the number of times the BFD session with a peer entered the up state. metallb_bfd_zebra_notifications counts the number of BFD Zebra notifications for each BFD peer. metallb_bgp_announced_prefixes_total counts the number of load balancer IP address prefixes that are advertised to BGP peers. The terms prefix and aggregated route have the same meaning. 
metallb_bgp_session_up indicates the connection state with a BGP peer. 1 indicates the session is up and 0 indicates the session is down . metallb_bgp_updates_total counts the number of BGP update messages that were sent to a BGP peer. Additional resources See Querying metrics for all projects with the monitoring dashboard for information about using the monitoring dashboard. 32.11.5. About collecting MetalLB data You can use the oc adm must-gather CLI command to collect information about your cluster, your MetalLB configuration, and the MetalLB Operator. The following features and objects are associated with MetalLB and the MetalLB Operator: The namespace and child objects that the MetalLB Operator is deployed in All MetalLB Operator custom resource definitions (CRDs) The oc adm must-gather CLI command collects the following information from FRRouting (FRR) that Red Hat uses to implement BGP and BFD: /etc/frr/frr.conf /etc/frr/frr.log /etc/frr/daemons configuration file /etc/frr/vtysh.conf The log and configuration files in the preceding list are collected from the frr container in each speaker pod. In addition to the log and configuration files, the oc adm must-gather CLI command collects the output from the following vtysh commands: show running-config show bgp ipv4 show bgp ipv6 show bgp neighbor show bfd peer No additional configuration is required when you run the oc adm must-gather CLI command. Additional resources Gathering data about your cluster | [
"\"event\":\"ipAllocated\",\"ip\":\"172.22.0.201\",\"msg\":\"IP address assigned by controller",
"cat << EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: name: metallb-system EOF",
"cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: metallb-operator namespace: metallb-system EOF",
"oc get operatorgroup -n metallb-system",
"NAME AGE metallb-operator 14m",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: metallb-operator-sub namespace: metallb-system spec: channel: stable name: metallb-operator source: redhat-operators 1 sourceNamespace: openshift-marketplace",
"oc create -f metallb-sub.yaml",
"oc label ns metallb-system \"openshift.io/cluster-monitoring=true\"",
"oc get installplan -n metallb-system",
"NAME CSV APPROVAL APPROVED install-wzg94 metallb-operator.4.14.0-nnnnnnnnnnnn Automatic true",
"oc get clusterserviceversion -n metallb-system -o custom-columns=Name:.metadata.name,Phase:.status.phase",
"Name Phase metallb-operator.4.14.0-nnnnnnnnnnnn Succeeded",
"cat << EOF | oc apply -f - apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system EOF",
"oc get deployment -n metallb-system controller",
"NAME READY UP-TO-DATE AVAILABLE AGE controller 1/1 1 1 11m",
"oc get daemonset -n metallb-system speaker",
"NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE speaker 6 6 6 6 6 kubernetes.io/os=linux 18m",
"apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: nodeSelector: 1 node-role.kubernetes.io/worker: \"\" speakerTolerations: 2 - key: \"Example\" operator: \"Exists\" effect: \"NoExecute\"",
"apiVersion: scheduling.k8s.io/v1 kind: PriorityClass metadata: name: high-priority value: 1000000",
"oc apply -f myPriorityClass.yaml",
"apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: logLevel: debug controllerConfig: priorityClassName: high-priority 1 affinity: podAffinity: 2 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app: metallb topologyKey: kubernetes.io/hostname speakerConfig: priorityClassName: high-priority affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app: metallb topologyKey: kubernetes.io/hostname",
"oc apply -f MetalLBPodConfig.yaml",
"oc get pods -n metallb-system -o custom-columns=NAME:.metadata.name,PRIORITY:.spec.priorityClassName",
"NAME PRIORITY controller-584f5c8cd8-5zbvg high-priority metallb-operator-controller-manager-9c8d9985-szkqg <none> metallb-operator-webhook-server-c895594d4-shjgx <none> speaker-dddf7 high-priority",
"oc get pod -o=custom-columns=NODE:.spec.nodeName,NAME:.metadata.name -n metallb-system",
"apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: logLevel: debug controllerConfig: resources: limits: cpu: \"200m\" speakerConfig: resources: limits: cpu: \"300m\"",
"oc apply -f CPULimits.yaml",
"oc describe pod <pod_name>",
"oc get subscription metallb-operator -n metallb-system -o yaml | grep currentCSV",
"currentCSV: metallb-operator.4.10.0-202207051316",
"oc delete subscription metallb-operator -n metallb-system",
"subscription.operators.coreos.com \"metallb-operator\" deleted",
"oc delete clusterserviceversion metallb-operator.4.10.0-202207051316 -n metallb-system",
"clusterserviceversion.operators.coreos.com \"metallb-operator.4.10.0-202207051316\" deleted",
"oc get operatorgroup -n metallb-system",
"NAME AGE metallb-system-7jc66 85m",
"oc get operatorgroup metallb-system-7jc66 -n metallb-system -o yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: \"\" creationTimestamp: \"2023-10-25T09:42:49Z\" generateName: metallb-system- generation: 1 name: metallb-system-7jc66 namespace: metallb-system resourceVersion: \"25027\" uid: f5f644a0-eef8-4e31-a306-e2bbcfaffab3 spec: targetNamespaces: - metallb-system upgradeStrategy: Default status: lastUpdated: \"2023-10-25T09:42:49Z\" namespaces: - metallb-system",
"oc edit n metallb-system",
"operatorgroup.operators.coreos.com/metallb-system-7jc66 edited",
"oc get operatorgroup metallb-system-7jc66 -n metallb-system -o yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: \"\" creationTimestamp: \"2023-10-25T09:42:49Z\" generateName: metallb-system- generation: 2 name: metallb-system-7jc66 namespace: metallb-system resourceVersion: \"61658\" uid: f5f644a0-eef8-4e31-a306-e2bbcfaffab3 spec: upgradeStrategy: Default status: lastUpdated: \"2023-10-25T14:31:30Z\" namespaces: - \"\"",
"oc get namespaces | grep metallb-system",
"metallb-system Active 31m",
"oc get metallb -n metallb-system",
"NAME AGE metallb 33m",
"oc get csv -n metallb-system",
"NAME DISPLAY VERSION REPLACES PHASE metallb-operator.4.14.0-202207051316 MetalLB Operator 4.14.0-202207051316 Succeeded",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example labels: 1 zone: east spec: addresses: - 203.0.113.1-203.0.113.10 - 203.0.113.65-203.0.113.75",
"oc apply -f ipaddresspool.yaml",
"oc describe -n metallb-system IPAddressPool doc-example",
"Name: doc-example Namespace: metallb-system Labels: zone=east Annotations: <none> API Version: metallb.io/v1beta1 Kind: IPAddressPool Metadata: Spec: Addresses: 203.0.113.1-203.0.113.10 203.0.113.65-203.0.113.75 Auto Assign: true Events: <none>",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-vlan labels: zone: east 1 spec: addresses: - 192.168.100.1-192.168.100.254 2",
"oc apply -f ipaddresspool-vlan.yaml",
"oc edit network.config.openshift/cluster",
"spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: gatewayConfig: ipForwarding: Global",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-cidr namespace: metallb-system spec: addresses: - 192.168.100.0/24 - 192.168.200.0/24 - 192.168.255.1-192.168.255.5",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-reserved namespace: metallb-system spec: addresses: - 10.0.100.0/28 autoAssign: false",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-combined namespace: metallb-system spec: addresses: - 10.0.100.0/28 - 2002:2:2::1-2002:2:2::100",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-service-allocation namespace: metallb-system spec: addresses: - 192.168.20.0/24 serviceAllocation: priority: 50 1 namespaces: 2 - namespace-a - namespace-b namespaceSelectors: 3 - matchLabels: zone: east serviceSelectors: 4 - matchExpressions: - key: security operator: In values: - S1",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-basic spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124",
"oc apply -f ipaddresspool.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-basic namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-basic",
"oc apply -f bgpadvertisement.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-adv labels: zone: east spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124 autoAssign: false",
"oc apply -f ipaddresspool.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-adv-1 namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-adv communities: - 65535:65282 aggregationLength: 32 localPref: 100",
"oc apply -f bgpadvertisement1.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-adv-2 namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-adv communities: - 8000:800 aggregationLength: 30 aggregationLengthV6: 124",
"oc apply -f bgpadvertisement2.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool1 spec: addresses: - 4.4.4.100-4.4.4.200 - 2001:100:4::200-2001:100:4::400",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: example spec: ipAddressPools: - pool1 nodeSelector: - matchLabels: kubernetes.io/hostname: NodeA - matchLabels: kubernetes.io/hostname: NodeB",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2 spec: addresses: - 4.4.4.0/24 autoAssign: false",
"oc apply -f ipaddresspool.yaml",
"apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - doc-example-l2",
"oc apply -f l2advertisement.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2-label labels: zone: east spec: addresses: - 172.31.249.87/32",
"oc apply -f ipaddresspool.yaml",
"apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement-label namespace: metallb-system spec: ipAddressPoolSelectors: - matchExpressions: - key: zone operator: In values: - east",
"oc apply -f l2advertisement.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2 spec: addresses: - 4.4.4.0/24 autoAssign: false",
"oc apply -f ipaddresspool.yaml",
"apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - doc-example-l2 interfaces: - interfaceA - interfaceB",
"oc apply -f l2advertisement.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: <node_role> 1 name: 81-enable-global-forwarding spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,`echo -e \"net.ipv4.conf.bridge-net.forwarding = 1\\nnet.ipv6.conf.bridge-net.forwarding = 1\\nnet.ipv4.conf.bridge-net.rp_filter = 0\\nnet.ipv6.conf.bridge-net.rp_filter = 0\" | base64 -w0` verification: {} filesystem: root mode: 644 path: /etc/sysctl.d/enable-global-forwarding.conf osImageURL: \"\"",
"oc apply -f enable-ip-forward.yaml",
"oc patch network.operator cluster -p '{\"spec\":{\"defaultNetwork\":{\"ovnKubernetesConfig\":{\"gatewayConfig\":{\"ipForwarding\": \"Global\"}}}}}",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: doc-example-peer spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10",
"oc apply -f bgppeer.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool1 spec: addresses: - 4.4.4.100-4.4.4.200 - 2001:100:4::200-2001:100:4::400",
"oc apply -f ipaddresspool1.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool2 spec: addresses: - 5.5.5.100-5.5.5.200 - 2001:100:5::200-2001:100:5::400",
"oc apply -f ipaddresspool2.yaml",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: peer1 spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10",
"oc apply -f bgppeer1.yaml",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: peer2 spec: peerAddress: 10.0.0.2 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10",
"oc apply -f bgppeer2.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-1 namespace: metallb-system spec: ipAddressPools: - pool1 peers: - peer1 communities: - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100",
"oc apply -f bgpadvertisement1.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-2 namespace: metallb-system spec: ipAddressPools: - pool2 peers: - peer2 communities: - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100",
"oc apply -f bgpadvertisement2.yaml",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: frrviavrf namespace: metallb-system spec: myASN: 100 peerASN: 200 peerAddress: 192.168.130.1 vrf: ens4vrf 1",
"oc apply -f frrviavrf.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: first-pool namespace: metallb-system spec: addresses: - 192.169.10.0/32",
"oc apply -f first-pool.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: first-adv namespace: metallb-system spec: ipAddressPools: - first-pool peers: - frrviavrf 1",
"oc apply -f first-adv.yaml",
"apiVersion: v1 kind: Namespace metadata: name: test --- apiVersion: apps/v1 kind: Deployment metadata: name: server namespace: test spec: selector: matchLabels: app: server template: metadata: labels: app: server spec: containers: - name: server image: registry.redhat.io/ubi9/ubi ports: - name: http containerPort: 30100 command: [\"/bin/sh\", \"-c\"] args: [\"sleep INF\"] --- apiVersion: v1 kind: Service metadata: name: server1 namespace: test spec: ports: - name: http port: 30100 protocol: TCP targetPort: 30100 selector: app: server type: LoadBalancer",
"oc apply -f deploy-service.yaml",
"oc get -n metallb-system pods -l component=speaker",
"NAME READY STATUS RESTARTS AGE speaker-c6c5f 6/6 Running 0 69m",
"oc exec -n metallb-system <speaker_pod> -c frr -- vtysh -c \"show bgp vrf <vrf_name> neigh\"",
"BGP neighbor is 192.168.30.1, remote AS 200, local AS 100, external link BGP version 4, remote router ID 192.168.30.1, local router ID 192.168.30.71 BGP state = Established, up for 04:20:09",
"oc exec -n metallb-system <speaker_pod> -c frr -- vtysh -c \"show bgp vrf <vrf_name> ipv4\"",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-nodesel namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64501 myASN: 64500 nodeSelectors: - matchExpressions: - key: kubernetes.io/hostname operator: In values: [compute-1.example.com, compute-2.example.com]",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-peer-bfd namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64501 myASN: 64500 holdTime: \"10s\" bfdProfile: doc-example-bfd-profile-full",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-dual-stack-ipv4 namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64500 myASN: 64500 --- apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-dual-stack-ipv6 namespace: metallb-system spec: peerAddress: 2620:52:0:88::104 peerASN: 64500 myASN: 64500",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-community spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124",
"oc apply -f ipaddresspool.yaml",
"apiVersion: metallb.io/v1beta1 kind: Community metadata: name: community1 namespace: metallb-system spec: communities: - name: NO_ADVERTISE value: '65535:65282'",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: doc-example-bgp-peer spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10",
"oc apply -f bgppeer.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgp-community-sample namespace: metallb-system spec: aggregationLength: 32 aggregationLengthV6: 128 communities: - NO_ADVERTISE 1 ipAddressPools: - doc-example-bgp-community peers: - doc-example-peer",
"oc apply -f bgpadvertisement.yaml",
"apiVersion: metallb.io/v1beta1 kind: BFDProfile metadata: name: doc-example-bfd-profile-full namespace: metallb-system spec: receiveInterval: 300 transmitInterval: 300 detectMultiplier: 3 echoMode: false passiveMode: true minimumTtl: 254",
"oc apply -f bfdprofile.yaml",
"apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.universe.tf/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer loadBalancerIP: <ip_address>",
"Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning AllocationFailed 3m16s metallb-controller Failed to allocate IP for \"default/invalid-request\": \"4.3.2.1\" is not allowed in config",
"apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.universe.tf/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer",
"apiVersion: v1 kind: Service metadata: name: <service_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer",
"apiVersion: v1 kind: Service metadata: name: service-http annotations: metallb.universe.tf/address-pool: doc-example metallb.universe.tf/allow-shared-ip: \"web-server-svc\" 1 spec: ports: - name: http port: 80 2 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> 3 type: LoadBalancer loadBalancerIP: 172.31.249.7 4 --- apiVersion: v1 kind: Service metadata: name: service-https annotations: metallb.universe.tf/address-pool: doc-example metallb.universe.tf/allow-shared-ip: \"web-server-svc\" spec: ports: - name: https port: 443 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> type: LoadBalancer loadBalancerIP: 172.31.249.7",
"oc apply -f <service_name>.yaml",
"service/<service_name> created",
"oc describe service <service_name>",
"Name: <service_name> Namespace: default Labels: <none> Annotations: metallb.universe.tf/address-pool: doc-example 1 Selector: app=service_name Type: LoadBalancer 2 IP Family Policy: SingleStack IP Families: IPv4 IP: 10.105.237.254 IPs: 10.105.237.254 LoadBalancer Ingress: 192.168.100.5 3 Port: <unset> 80/TCP TargetPort: 8080/TCP NodePort: <unset> 30550/TCP Endpoints: 10.244.0.50:8080 Session Affinity: None External Traffic Policy: Cluster Events: 4 Type Reason Age From Message ---- ------ ---- ---- ------- Normal nodeAssigned 32m (x2 over 32m) metallb-speaker announcing from node \"<node_name>\"",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: vrfpolicy 1 spec: nodeSelector: vrf: \"true\" 2 maxUnavailable: 3 desiredState: interfaces: - name: ens4vrf 3 type: vrf 4 state: up vrf: port: - ens4 5 route-table-id: 2 6 - name: ens4 7 type: ethernet state: up ipv4: address: - ip: 192.168.130.130 prefix-length: 24 dhcp: false enabled: true routes: 8 config: - destination: 0.0.0.0/0 metric: 150 next-hop-address: 192.168.130.1 next-hop-interface: ens4 table-id: 2 route-rules: 9 config: - ip-to: 172.30.0.0/16 priority: 998 route-table: 254 10 - ip-to: 10.132.0.0/14 priority: 998 route-table: 254",
"oc apply -f node-network-vrf.yaml",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: frrviavrf namespace: metallb-system spec: myASN: 100 peerASN: 200 peerAddress: 192.168.130.1 vrf: ens4vrf 1",
"oc apply -f frr-via-vrf.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: first-pool namespace: metallb-system spec: addresses: - 192.169.10.0/32",
"oc apply -f first-pool.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: first-adv namespace: metallb-system spec: ipAddressPools: - first-pool peers: - frrviavrf 1 nodeSelectors: - matchLabels: egress-service.k8s.ovn.org/test-server1: \"\" 2",
"oc apply -f first-adv.yaml",
"apiVersion: k8s.ovn.org/v1 kind: EgressService metadata: name: server1 1 namespace: test 2 spec: sourceIPBy: \"LoadBalancerIP\" 3 nodeSelector: matchLabels: vrf: \"true\" 4 network: \"2\" 5",
"oc apply -f egress-service.yaml",
"curl <external_ip_address>:<port_number> 1",
"apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: logLevel: debug nodeSelector: node-role.kubernetes.io/worker: \"\"",
"oc replace -f setdebugloglevel.yaml",
"oc get -n metallb-system pods -l component=speaker",
"NAME READY STATUS RESTARTS AGE speaker-2m9pm 4/4 Running 0 9m19s speaker-7m4qw 3/4 Running 0 19s speaker-szlmx 4/4 Running 0 9m19s",
"oc logs -n metallb-system speaker-7m4qw -c speaker",
"{\"branch\":\"main\",\"caller\":\"main.go:92\",\"commit\":\"3d052535\",\"goversion\":\"gc / go1.17.1 / amd64\",\"level\":\"info\",\"msg\":\"MetalLB speaker starting (commit 3d052535, branch main)\",\"ts\":\"2022-05-17T09:55:05Z\",\"version\":\"\"} {\"caller\":\"announcer.go:110\",\"event\":\"createARPResponder\",\"interface\":\"ens4\",\"level\":\"info\",\"msg\":\"created ARP responder for interface\",\"ts\":\"2022-05-17T09:55:05Z\"} {\"caller\":\"announcer.go:119\",\"event\":\"createNDPResponder\",\"interface\":\"ens4\",\"level\":\"info\",\"msg\":\"created NDP responder for interface\",\"ts\":\"2022-05-17T09:55:05Z\"} {\"caller\":\"announcer.go:110\",\"event\":\"createARPResponder\",\"interface\":\"tun0\",\"level\":\"info\",\"msg\":\"created ARP responder for interface\",\"ts\":\"2022-05-17T09:55:05Z\"} {\"caller\":\"announcer.go:119\",\"event\":\"createNDPResponder\",\"interface\":\"tun0\",\"level\":\"info\",\"msg\":\"created NDP responder for interface\",\"ts\":\"2022-05-17T09:55:05Z\"} I0517 09:55:06.515686 95 request.go:665] Waited for 1.026500832s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/operators.coreos.com/v1alpha1?timeout=32s {\"Starting Manager\":\"(MISSING)\",\"caller\":\"k8s.go:389\",\"level\":\"info\",\"ts\":\"2022-05-17T09:55:08Z\"} {\"caller\":\"speakerlist.go:310\",\"level\":\"info\",\"msg\":\"node event - forcing sync\",\"node addr\":\"10.0.128.4\",\"node event\":\"NodeJoin\",\"node name\":\"ci-ln-qb8t3mb-72292-7s7rh-worker-a-vvznj\",\"ts\":\"2022-05-17T09:55:08Z\"} {\"caller\":\"service_controller.go:113\",\"controller\":\"ServiceReconciler\",\"enqueueing\":\"openshift-kube-controller-manager-operator/metrics\",\"epslice\":\"{\\\"metadata\\\":{\\\"name\\\":\\\"metrics-xtsxr\\\",\\\"generateName\\\":\\\"metrics-\\\",\\\"namespace\\\":\\\"openshift-kube-controller-manager-operator\\\",\\\"uid\\\":\\\"ac6766d7-8504-492c-9d1e-4ae8897990ad\\\",\\\"resourceVersion\\\":\\\"9041\\\",\\\"generation\\\":4,\\\"creationTimestamp\\\":\\\"2022-05-17T07:16:53Z\\\",\\\"labels\\\":{\\\"app\\\":\\\"kube-controller-manager-operator\\\",\\\"endpointslice.kubernetes.io/managed-by\\\":\\\"endpointslice-controller.k8s.io\\\",\\\"kubernetes.io/service-name\\\":\\\"metrics\\\"},\\\"annotations\\\":{\\\"endpoints.kubernetes.io/last-change-trigger-time\\\":\\\"2022-05-17T07:21:34Z\\\"},\\\"ownerReferences\\\":[{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Service\\\",\\\"name\\\":\\\"metrics\\\",\\\"uid\\\":\\\"0518eed3-6152-42be-b566-0bd00a60faf8\\\",\\\"controller\\\":true,\\\"blockOwnerDeletion\\\":true}],\\\"managedFields\\\":[{\\\"manager\\\":\\\"kube-controller-manager\\\",\\\"operation\\\":\\\"Update\\\",\\\"apiVersion\\\":\\\"discovery.k8s.io/v1\\\",\\\"time\\\":\\\"2022-05-17T07:20:02Z\\\",\\\"fieldsType\\\":\\\"FieldsV1\\\",\\\"fieldsV1\\\":{\\\"f:addressType\\\":{},\\\"f:endpoints\\\":{},\\\"f:metadata\\\":{\\\"f:annotations\\\":{\\\".\\\":{},\\\"f:endpoints.kubernetes.io/last-change-trigger-time\\\":{}},\\\"f:generateName\\\":{},\\\"f:labels\\\":{\\\".\\\":{},\\\"f:app\\\":{},\\\"f:endpointslice.kubernetes.io/managed-by\\\":{},\\\"f:kubernetes.io/service-name\\\":{}},\\\"f:ownerReferences\\\":{\\\".\\\":{},\\\"k:{\\\\\\\"uid\\\\\\\":\\\\\\\"0518eed3-6152-42be-b566-0bd00a60faf8\\\\\\\"}\\\":{}}},\\\"f:ports\\\":{}}}]},\\\"addressType\\\":\\\"IPv4\\\",\\\"endpoints\\\":[{\\\"addresses\\\":[\\\"10.129.0.7\\\"],\\\"conditions\\\":{\\\"ready\\\":true,\\\"serving\\\":true,\\\"terminating\\\":false},\\\"targetRef\\\":{\\\"kind\\\":\\\"
Pod\\\",\\\"namespace\\\":\\\"openshift-kube-controller-manager-operator\\\",\\\"name\\\":\\\"kube-controller-manager-operator-6b98b89ddd-8d4nf\\\",\\\"uid\\\":\\\"dd5139b8-e41c-4946-a31b-1a629314e844\\\",\\\"resourceVersion\\\":\\\"9038\\\"},\\\"nodeName\\\":\\\"ci-ln-qb8t3mb-72292-7s7rh-master-0\\\",\\\"zone\\\":\\\"us-central1-a\\\"}],\\\"ports\\\":[{\\\"name\\\":\\\"https\\\",\\\"protocol\\\":\\\"TCP\\\",\\\"port\\\":8443}]}\",\"level\":\"debug\",\"ts\":\"2022-05-17T09:55:08Z\"}",
"oc logs -n metallb-system speaker-7m4qw -c frr",
"Started watchfrr 2022/05/17 09:55:05 ZEBRA: client 16 says hello and bids fair to announce only bgp routes vrf=0 2022/05/17 09:55:05 ZEBRA: client 31 says hello and bids fair to announce only vnc routes vrf=0 2022/05/17 09:55:05 ZEBRA: client 38 says hello and bids fair to announce only static routes vrf=0 2022/05/17 09:55:05 ZEBRA: client 43 says hello and bids fair to announce only bfd routes vrf=0 2022/05/17 09:57:25.089 BGP: Creating Default VRF, AS 64500 2022/05/17 09:57:25.090 BGP: dup addr detect enable max_moves 5 time 180 freeze disable freeze_time 0 2022/05/17 09:57:25.090 BGP: bgp_get: Registering BGP instance (null) to zebra 2022/05/17 09:57:25.090 BGP: Registering VRF 0 2022/05/17 09:57:25.091 BGP: Rx Router Id update VRF 0 Id 10.131.0.1/32 2022/05/17 09:57:25.091 BGP: RID change : vrf VRF default(0), RTR ID 10.131.0.1 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF br0 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF ens4 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF ens4 addr 10.0.128.4/32 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF ens4 addr fe80::c9d:84da:4d86:5618/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF lo 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF ovs-system 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF tun0 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF tun0 addr 10.131.0.1/23 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF tun0 addr fe80::40f1:d1ff:feb6:5322/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF veth2da49fed 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF veth2da49fed addr fe80::24bd:d1ff:fec1:d88/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF veth2fa08c8c 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF veth2fa08c8c addr fe80::6870:ff:fe96:efc8/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF veth41e356b7 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF veth41e356b7 addr fe80::48ff:37ff:fede:eb4b/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF veth1295c6e2 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF veth1295c6e2 addr fe80::b827:a2ff:feed:637/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF veth9733c6dc 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF veth9733c6dc addr fe80::3cf4:15ff:fe11:e541/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF veth336680ea 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF veth336680ea addr fe80::94b1:8bff:fe7e:488c/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vetha0a907b7 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vetha0a907b7 addr fe80::3855:a6ff:fe73:46c3/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vethf35a4398 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vethf35a4398 addr fe80::40ef:2fff:fe57:4c4d/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vethf831b7f4 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vethf831b7f4 addr fe80::f0d9:89ff:fe7c:1d32/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vxlan_sys_4789 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vxlan_sys_4789 addr fe80::80c1:82ff:fe4b:f078/64 2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] Timer (start timer expire). 
2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] BGP_Start (Idle->Connect), fd -1 2022/05/17 09:57:26.094 BGP: Allocated bnc 10.0.0.1/32(0)(VRF default) peer 0x7f807f7631a0 2022/05/17 09:57:26.094 BGP: sendmsg_zebra_rnh: sending cmd ZEBRA_NEXTHOP_REGISTER for 10.0.0.1/32 (vrf VRF default) 2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] Waiting for NHT 2022/05/17 09:57:26.094 BGP: bgp_fsm_change_status : vrf default(0), Status: Connect established_peers 0 2022/05/17 09:57:26.094 BGP: 10.0.0.1 went from Idle to Connect 2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] TCP_connection_open_failed (Connect->Active), fd -1 2022/05/17 09:57:26.094 BGP: bgp_fsm_change_status : vrf default(0), Status: Active established_peers 0 2022/05/17 09:57:26.094 BGP: 10.0.0.1 went from Connect to Active 2022/05/17 09:57:26.094 ZEBRA: rnh_register msg from client bgp: hdr->length=8, type=nexthop vrf=0 2022/05/17 09:57:26.094 ZEBRA: 0: Add RNH 10.0.0.1/32 type Nexthop 2022/05/17 09:57:26.094 ZEBRA: 0:10.0.0.1/32: Evaluate RNH, type Nexthop (force) 2022/05/17 09:57:26.094 ZEBRA: 0:10.0.0.1/32: NH has become unresolved 2022/05/17 09:57:26.094 ZEBRA: 0: Client bgp registers for RNH 10.0.0.1/32 type Nexthop 2022/05/17 09:57:26.094 BGP: VRF default(0): Rcvd NH update 10.0.0.1/32(0) - metric 0/0 #nhops 0/0 flags 0x6 2022/05/17 09:57:26.094 BGP: NH update for 10.0.0.1/32(0)(VRF default) - flags 0x6 chgflags 0x0 - evaluate paths 2022/05/17 09:57:26.094 BGP: evaluate_paths: Updating peer (10.0.0.1(VRF default)) status with NHT 2022/05/17 09:57:30.081 ZEBRA: Event driven route-map update triggered 2022/05/17 09:57:30.081 ZEBRA: Event handler for route-map: 10.0.0.1-out 2022/05/17 09:57:30.081 ZEBRA: Event handler for route-map: 10.0.0.1-in 2022/05/17 09:57:31.104 ZEBRA: netlink_parse_info: netlink-listen (NS 0) type RTM_NEWNEIGH(28), len=76, seq=0, pid=0 2022/05/17 09:57:31.104 ZEBRA: Neighbor Entry received is not on a VLAN or a BRIDGE, ignoring 2022/05/17 09:57:31.105 ZEBRA: netlink_parse_info: netlink-listen (NS 0) type RTM_NEWNEIGH(28), len=76, seq=0, pid=0 2022/05/17 09:57:31.105 ZEBRA: Neighbor Entry received is not on a VLAN or a BRIDGE, ignoring",
"oc get -n metallb-system pods -l component=speaker",
"NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 56m speaker-gvfnf 4/4 Running 0 56m",
"oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show running-config\"",
"Building configuration Current configuration: ! frr version 7.5.1_git frr defaults traditional hostname some-hostname log file /etc/frr/frr.log informational log timestamp precision 3 service integrated-vtysh-config ! router bgp 64500 1 bgp router-id 10.0.1.2 no bgp ebgp-requires-policy no bgp default ipv4-unicast no bgp network import-check neighbor 10.0.2.3 remote-as 64500 2 neighbor 10.0.2.3 bfd profile doc-example-bfd-profile-full 3 neighbor 10.0.2.3 timers 5 15 neighbor 10.0.2.4 remote-as 64500 neighbor 10.0.2.4 bfd profile doc-example-bfd-profile-full neighbor 10.0.2.4 timers 5 15 ! address-family ipv4 unicast network 203.0.113.200/30 4 neighbor 10.0.2.3 activate neighbor 10.0.2.3 route-map 10.0.2.3-in in neighbor 10.0.2.4 activate neighbor 10.0.2.4 route-map 10.0.2.4-in in exit-address-family ! address-family ipv6 unicast network fc00:f853:ccd:e799::/124 neighbor 10.0.2.3 activate neighbor 10.0.2.3 route-map 10.0.2.3-in in neighbor 10.0.2.4 activate neighbor 10.0.2.4 route-map 10.0.2.4-in in exit-address-family ! route-map 10.0.2.3-in deny 20 ! route-map 10.0.2.4-in deny 20 ! ip nht resolve-via-default ! ipv6 nht resolve-via-default ! line vty ! bfd profile doc-example-bfd-profile-full transmit-interval 35 receive-interval 35 passive-mode echo-mode echo-interval 35 minimum-ttl 10 ! ! end",
"oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show bgp summary\"",
"IPv4 Unicast Summary: BGP router identifier 10.0.1.2, local AS number 64500 vrf-id 0 BGP table version 1 RIB entries 1, using 192 bytes of memory Peers 2, using 29 KiB of memory Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd PfxSnt 10.0.2.3 4 64500 387 389 0 0 0 00:32:02 0 1 1 10.0.2.4 4 64500 0 0 0 0 0 never Active 0 2 Total number of neighbors 2 IPv6 Unicast Summary: BGP router identifier 10.0.1.2, local AS number 64500 vrf-id 0 BGP table version 1 RIB entries 1, using 192 bytes of memory Peers 2, using 29 KiB of memory Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd PfxSnt 10.0.2.3 4 64500 387 389 0 0 0 00:32:02 NoNeg 10.0.2.4 4 64500 0 0 0 0 0 never Active 0 Total number of neighbors 2",
"oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show bgp ipv4 unicast 203.0.113.200/30\"",
"BGP routing table entry for 203.0.113.200/30 Paths: (1 available, best #1, table default) Advertised to non peer-group peers: 10.0.2.3 1 Local 0.0.0.0 from 0.0.0.0 (10.0.1.2) Origin IGP, metric 0, weight 32768, valid, sourced, local, best (First path received) Last update: Mon Jan 10 19:49:07 2022",
"oc get -n metallb-system pods -l component=speaker",
"NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 26m speaker-gvfnf 4/4 Running 0 26m",
"oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show bfd peers brief\"",
"Session count: 2 SessionId LocalAddress PeerAddress Status ========= ============ =========== ====== 3909139637 10.0.1.2 10.0.2.3 up <.>"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/networking/load-balancing-with-metallb |
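The per-pod FRR checks shown in the command list above can be run in one pass across every speaker. The loop below is a minimal sketch that assumes the metallb-system namespace, the component=speaker label, and the frr container name used in those examples; it adds nothing beyond the documented vtysh subcommands.

#!/usr/bin/env bash
# Sketch: run the documented FRR status checks against every MetalLB speaker pod.
set -euo pipefail
for pod in $(oc get -n metallb-system pods -l component=speaker -o jsonpath='{.items[*].metadata.name}'); do
  echo "=== ${pod} ==="
  # Established BGP peers report a prefix count; a failed peer stays in "Active" or "Connect"
  oc exec -n metallb-system "${pod}" -c frr -- vtysh -c "show bgp summary"
  # BFD sessions should report "up" when fast failure detection is negotiated with the peer
  oc exec -n metallb-system "${pod}" -c frr -- vtysh -c "show bfd peers brief"
done

Run it with the same kubeconfig context used for the individual commands above.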
Chapter 9. Upgrading RHACS Cloud Service | Chapter 9. Upgrading RHACS Cloud Service 9.1. Upgrading secured clusters in RHACS Cloud Service by using the Operator Red Hat provides regular service updates for the components that it manages, including Central services. These service updates include upgrades to new versions of Red Hat Advanced Cluster Security Cloud Service. You must regularly upgrade the version of RHACS on your secured clusters to ensure compatibility with RHACS Cloud Service. 9.1.1. Preparing to upgrade Before you upgrade the Red Hat Advanced Cluster Security for Kubernetes (RHACS) version, complete the following steps: If the cluster you are upgrading contains the SecuredCluster custom resource (CR), change the collection method to CORE_BPF . For more information, see "Changing the collection method". 9.1.1.1. Changing the collection method If the cluster that you are upgrading contains the SecuredCluster CR, you must ensure that the per node collection setting is set to CORE_BPF before you upgrade. Procedure In the OpenShift Container Platform web console, go to the RHACS Operator page. In the top navigation menu, select Secured Cluster . Click the instance name, for example, stackrox-secured-cluster-services . Use one of the following methods to change the setting: In the Form view , under Per Node Settings Collector Settings Collection , select CORE_BPF . Click YAML to open the YAML editor and locate the spec.perNode.collector.collection attribute. If the value is KernelModule or EBPF , then change it to CORE_BPF . Click Save. Additional resources Updating installed Operators 9.1.2. Rolling back an Operator upgrade for secured clusters To roll back an Operator upgrade, you can use either the CLI or the OpenShift Container Platform web console. Note On secured clusters, rolling back Operator upgrades is needed only in rare cases, for example, if an issue exists with the secured cluster. 9.1.2.1. Rolling back an Operator upgrade by using the CLI You can roll back the Operator version by using CLI commands. Procedure Delete the OLM subscription by running the following command: For OpenShift Container Platform, run the following command: USD oc -n rhacs-operator delete subscription rhacs-operator For Kubernetes, run the following command: USD kubectl -n rhacs-operator delete subscription rhacs-operator Delete the cluster service version (CSV) by running the following command: For OpenShift Container Platform, run the following command: USD oc -n rhacs-operator delete csv -l operators.coreos.com/rhacs-operator.rhacs-operator For Kubernetes, run the following command: USD kubectl -n rhacs-operator delete csv -l operators.coreos.com/rhacs-operator.rhacs-operator Install the latest version of the Operator on the rolled back channel. 9.1.2.2. Rolling back an Operator upgrade by using the web console You can roll back the Operator version by using the OpenShift Container Platform web console. Prerequisites You have access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions. Procedure Go to the Operators Installed Operators page. Click the RHACS Operator. On the Operator Details page, select Uninstall Operator from the Actions list. Following this action, the Operator stops running and no longer receives updates. Install the latest version of the Operator on the rolled back channel. Additional resources Operator Lifecycle Manager workflow Manually approving a pending Operator update 9.1.3. 
Troubleshooting Operator upgrade issues Follow these instructions to investigate and resolve upgrade-related issues for the RHACS Operator. 9.1.3.1. Central or Secured cluster fails to deploy When RHACS Operator has the following conditions, you must check the custom resource conditions to find the issue: If the Operator fails to deploy Secured Cluster If the Operator fails to apply CR changes to actual resources For Secured clusters, run the following command to check the conditions: USD oc -n rhacs-operator describe securedclusters.platform.stackrox.io 1 1 If you use Kubernetes, enter kubectl instead of oc . You can identify configuration errors from the conditions output: Example output Conditions: Last Transition Time: 2023-04-19T10:49:57Z Status: False Type: Deployed Last Transition Time: 2023-04-19T10:49:57Z Status: True Type: Initialized Last Transition Time: 2023-04-19T10:59:10Z Message: Deployment.apps "central" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: "50": must be less than or equal to cpu limit Reason: ReconcileError Status: True Type: Irreconcilable Last Transition Time: 2023-04-19T10:49:57Z Message: No proxy configuration is desired Reason: NoProxyConfig Status: False Type: ProxyConfigFailed Last Transition Time: 2023-04-19T10:49:57Z Message: Deployment.apps "central" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: "50": must be less than or equal to cpu limit Reason: InstallError Status: True Type: ReleaseFailed Additionally, you can view RHACS pod logs to find more information about the issue. Run the following command to view the logs: oc -n rhacs-operator logs deploy/rhacs-operator-controller-manager manager 1 1 If you use Kubernetes, enter kubectl instead of oc . 9.2. Upgrading secured clusters in RHACS Cloud Service by using Helm charts You can upgrade your secured clusters in RHACS Cloud Service by using Helm charts. If you installed RHACS secured clusters by using Helm charts, you can upgrade to the latest version of RHACS by updating the Helm chart and running the helm upgrade command. 9.2.1. Updating the Helm chart repository You must always update Helm charts before upgrading to a new version of Red Hat Advanced Cluster Security for Kubernetes. Prerequisites You must have already added the Red Hat Advanced Cluster Security for Kubernetes Helm chart repository. You must be using Helm version 3.8.3 or newer. Procedure Update Red Hat Advanced Cluster Security for Kubernetes charts repository. USD helm repo update Verification Run the following command to verify the added chart repository: USD helm search repo -l rhacs/ 9.2.2. Running the Helm upgrade command You can use the helm upgrade command to update Red Hat Advanced Cluster Security for Kubernetes (RHACS). Prerequisites You must have access to the values-private.yaml configuration file that you have used to install Red Hat Advanced Cluster Security for Kubernetes (RHACS). Otherwise, you must generate the values-private.yaml configuration file containing root certificates before proceeding with these commands. Procedure Run the helm upgrade command and specify the configuration files by using the -f option: USD helm upgrade -n stackrox stackrox-secured-cluster-services \ rhacs/secured-cluster-services --version <current-rhacs-version> \ 1 -f values-private.yaml 1 Use the -f option to specify the paths for your YAML configuration files. 9.2.3. Additional resources Installing RHACS Cloud Service on secured clusters by using Helm charts 9.3. 
Manually upgrading secured clusters in RHACS Cloud Service by using the roxctl CLI You can upgrade your secured clusters in RHACS Cloud Service by using the roxctl CLI. Important You need to manually upgrade secured clusters only if you used the roxctl CLI to install the secured clusters. 9.3.1. Upgrading the roxctl CLI To upgrade the roxctl CLI to the latest version, you must uninstall your current version of the roxctl CLI and then install the latest version of the roxctl CLI. 9.3.1.1. Uninstalling the roxctl CLI You can uninstall the roxctl CLI binary on Linux by using the following procedure. Procedure Find and delete the roxctl binary: USD ROXPATH=USD(which roxctl) && rm -f USDROXPATH 1 1 Depending on your environment, you might need administrator rights to delete the roxctl binary. 9.3.1.2. Installing the roxctl CLI on Linux You can install the roxctl CLI binary on Linux by using the following procedure. Note roxctl CLI for Linux is available for amd64 , arm64 , ppc64le , and s390x architectures. Procedure Determine the roxctl architecture for the target operating system: USD arch="USD(uname -m | sed "s/x86_64//")"; arch="USD{arch:+-USDarch}" Download the roxctl CLI: USD curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.7.0/bin/Linux/roxctlUSD{arch}" Make the roxctl binary executable: USD chmod +x roxctl Place the roxctl binary in a directory that is on your PATH : To check your PATH , execute the following command: USD echo USDPATH Verification Verify the roxctl version you have installed: USD roxctl version 9.3.1.3. Installing the roxctl CLI on macOS You can install the roxctl CLI binary on macOS by using the following procedure. Note roxctl CLI for macOS is available for amd64 and arm64 architectures. Procedure Determine the roxctl architecture for the target operating system: USD arch="USD(uname -m | sed "s/x86_64//")"; arch="USD{arch:+-USDarch}" Download the roxctl CLI: USD curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.7.0/bin/Darwin/roxctlUSD{arch}" Remove all extended attributes from the binary: USD xattr -c roxctl Make the roxctl binary executable: USD chmod +x roxctl Place the roxctl binary in a directory that is on your PATH : To check your PATH , execute the following command: USD echo USDPATH Verification Verify the roxctl version you have installed: USD roxctl version 9.3.1.4. Installing the roxctl CLI on Windows You can install the roxctl CLI binary on Windows by using the following procedure. Note roxctl CLI for Windows is available for the amd64 architecture. Procedure Download the roxctl CLI: USD curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.7.0/bin/Windows/roxctl.exe Verification Verify the roxctl version you have installed: USD roxctl version 9.3.2. Upgrading all secured clusters manually Important To ensure optimal functionality, use the same RHACS version for your secured clusters that RHACS Cloud Service is running. If you are using automatic upgrades, update all your secured clusters by using automatic upgrades. If you are not using automatic upgrades, complete the instructions in this section on all secured clusters. To complete manual upgrades of each secured cluster running Sensor, Collector, and Admission controller, follow these instructions. 9.3.2.1. Updating other images You must update the sensor, collector and compliance images on each secured cluster when not using automatic upgrades. Note If you are using Kubernetes, use kubectl instead of oc for the commands listed in this procedure. 
Procedure Update the Sensor image: USD oc -n stackrox set image deploy/sensor sensor=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.7.0 1 1 If you use Kubernetes, enter kubectl instead of oc . Update the Compliance image: USD oc -n stackrox set image ds/collector compliance=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.7.0 1 1 If you use Kubernetes, enter kubectl instead of oc . Update the Collector image: USD oc -n stackrox set image ds/collector collector=registry.redhat.io/advanced-cluster-security/rhacs-collector-rhel8:4.7.0 1 1 If you use Kubernetes, enter kubectl instead of oc . Update the admission control image: USD oc -n stackrox set image deploy/admission-control admission-control=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.7.0 Important If you have installed RHACS on Red Hat OpenShift by using the roxctl CLI, you need to migrate the security context constraints (SCCs). For more information, see "Migrating SCCs during the manual upgrade" in the "Additional resources" section. Additional resources Authenticating by using the roxctl CLI 9.3.2.2. Migrating SCCs during the manual upgrade By migrating the security context constraints (SCCs) during the manual upgrade by using roxctl CLI, you can seamlessly transition the Red Hat Advanced Cluster Security for Kubernetes (RHACS) services to use the Red Hat OpenShift SCCs, ensuring compatibility and optimal security configurations across Central and all secured clusters. Procedure List all of the RHACS services that are deployed on all secured clusters: USD oc -n stackrox describe pods | grep 'openshift.io/scc\|^Name:' Example output Name: admission-control-6f4dcc6b4c-2phwd openshift.io/scc: stackrox-admission-control #... Name: central-575487bfcb-sjdx8 openshift.io/scc: stackrox-central Name: central-db-7c7885bb-6bgbd openshift.io/scc: stackrox-central-db Name: collector-56nkr openshift.io/scc: stackrox-collector #... Name: scanner-68fc55b599-f2wm6 openshift.io/scc: stackrox-scanner Name: scanner-68fc55b599-fztlh #... Name: sensor-84545f86b7-xgdwf openshift.io/scc: stackrox-sensor #... In this example, you can see that each pod has its own custom SCC, which is specified through the openshift.io/scc field. Add the required roles and role bindings to use the Red Hat OpenShift SCCs instead of the RHACS custom SCCs. To add the required roles and role bindings to use the Red Hat OpenShift SCCs for all secured clusters, complete the following steps: Create a file named upgrade-scs.yaml that defines the role and role binding resources by using the following content: Example 9.1. 
Example YAML file apiVersion: rbac.authorization.k8s.io/v1 kind: Role 1 metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: collector app.kubernetes.io/instance: stackrox-secured-cluster-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-secured-cluster-services app.kubernetes.io/version: 4.4.0 auto-upgrade.stackrox.io/component: sensor name: use-privileged-scc 2 namespace: stackrox 3 rules: 4 - apiGroups: - security.openshift.io resourceNames: - privileged resources: - securitycontextconstraints verbs: - use - - - apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding 5 metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: collector app.kubernetes.io/instance: stackrox-secured-cluster-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-secured-cluster-services app.kubernetes.io/version: 4.4.0 auto-upgrade.stackrox.io/component: sensor name: collector-use-scc 6 namespace: stackrox roleRef: 7 apiGroup: rbac.authorization.k8s.io kind: Role name: use-privileged-scc subjects: 8 - kind: ServiceAccount name: collector namespace: stackrox - - - 1 The type of Kubernetes resource, in this example, Role . 2 The name of the role resource. 3 The namespace in which the role is created. 4 Describes the permissions granted by the role resource. 5 The type of Kubernetes resource, in this example, RoleBinding . 6 The name of the role binding resource. 7 Specifies the role to bind in the same namespace. 8 Specifies the subjects that are bound to the role. Create the role and role binding resources specified in the upgrade-scs.yaml file by running the following command: USD oc -n stackrox create -f ./update-scs.yaml Important You must run this command on each secured cluster to create the role and role bindings specified in the upgrade-scs.yaml file. Delete the SCCs that are specific to RHACS: To delete the SCCs that are specific to all secured clusters, run the following command: USD oc delete scc/stackrox-admission-control scc/stackrox-collector scc/stackrox-sensor Important You must run this command on each secured cluster to delete the SCCs that are specific to each secured cluster. Verification Ensure that all the pods are using the correct SCCs by running the following command: USD oc -n stackrox describe pods | grep 'openshift.io/scc\|^Name:' Compare the output with the following table: Component custom SCC New Red Hat OpenShift 4 SCC Central stackrox-central nonroot-v2 Central-db stackrox-central-db nonroot-v2 Scanner stackrox-scanner nonroot-v2 Scanner-db stackrox-scanner nonroot-v2 Admission Controller stackrox-admission-control restricted-v2 Collector stackrox-collector privileged Sensor stackrox-sensor restricted-v2 9.3.2.2.1. Editing the GOMEMLIMIT environment variable for the Sensor deployment Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. You must edit this variable for each deployment. Procedure Run the following command to edit the variable for the Sensor deployment: USD oc -n stackrox edit deploy/sensor 1 1 If you use Kubernetes, enter kubectl instead of oc . Replace the GOMEMLIMIT variable with ROX_MEMLIMIT . Save the file. 9.3.2.2.2. Editing the GOMEMLIMIT environment variable for the Collector deployment Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. 
You must edit this variable for each deployment. Procedure Run the following command to edit the variable for the Collector deployment: USD oc -n stackrox edit deploy/collector 1 1 If you use Kubernetes, enter kubectl instead of oc . Replace the GOMEMLIMIT variable with ROX_MEMLIMIT . Save the file. 9.3.2.2.3. Editing the GOMEMLIMIT environment variable for the Admission Controller deployment Upgrading to version 4.4 requires that you manually replace the GOMEMLIMIT environment variable with the ROX_MEMLIMIT environment variable. You must edit this variable for each deployment. Procedure Run the following command to edit the variable for the Admission Controller deployment: USD oc -n stackrox edit deploy/admission-control 1 1 If you use Kubernetes, enter kubectl instead of oc . Replace the GOMEMLIMIT variable with ROX_MEMLIMIT . Save the file. 9.3.2.2.4. Verifying secured cluster upgrade After you have upgraded secured clusters, verify that the updated pods are working. Procedure Check that the new pods have deployed: USD oc get deploy,ds -n stackrox -o wide 1 1 If you use Kubernetes, enter kubectl instead of oc . USD oc get pod -n stackrox --watch 1 1 If you use Kubernetes, enter kubectl instead of oc . 9.3.3. Enabling RHCOS node scanning with the StackRox Scanner If you use OpenShift Container Platform, you can enable scanning of Red Hat Enterprise Linux CoreOS (RHCOS) nodes for vulnerabilities by using Red Hat Advanced Cluster Security for Kubernetes (RHACS). Prerequisites For scanning RHCOS node hosts of the secured cluster, you must have installed Secured Cluster services on OpenShift Container Platform 4.12 or later. For information about supported platforms and architecture, see the Red Hat Advanced Cluster Security for Kubernetes Support Matrix . For life cycle support information for RHACS, see the Red Hat Advanced Cluster Security for Kubernetes Support Policy . This procedure describes how to enable node scanning for the first time. If you are reconfiguring Red Hat Advanced Cluster Security for Kubernetes to use the StackRox Scanner instead of Scanner V4, follow the procedure in "Restoring RHCOS node scanning with the StackRox Scanner". Procedure Run one of the following commands to update the compliance container. 
For a default compliance container with metrics disabled, run the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":"disabled"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}' For a compliance container with Prometheus metrics enabled, run the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":":9091"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}' Update the Collector DaemonSet (DS) by taking the following steps: Add new volume mounts to Collector DS by running the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"volumes":[{"name":"tmp-volume","emptyDir":{}},{"name":"cache-volume","emptyDir":{"sizeLimit":"200Mi"}}]}}}}' Add the new NodeScanner container by running the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"command":["/scanner","--nodeinventory","--config=",""],"env":[{"name":"ROX_NODE_NAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"spec.nodeName"}}},{"name":"ROX_CLAIR_V4_SCANNING","value":"true"},{"name":"ROX_COMPLIANCE_OPERATOR_INTEGRATION","value":"true"},{"name":"ROX_CSV_EXPORT","value":"false"},{"name":"ROX_DECLARATIVE_CONFIGURATION","value":"false"},{"name":"ROX_INTEGRATIONS_AS_CONFIG","value":"false"},{"name":"ROX_NETPOL_FIELDS","value":"true"},{"name":"ROX_NETWORK_DETECTION_BASELINE_SIMULATION","value":"true"},{"name":"ROX_NETWORK_GRAPH_PATTERNFLY","value":"true"},{"name":"ROX_NODE_SCANNING_CACHE_TIME","value":"3h36m"},{"name":"ROX_NODE_SCANNING_INITIAL_BACKOFF","value":"30s"},{"name":"ROX_NODE_SCANNING_MAX_BACKOFF","value":"5m"},{"name":"ROX_PROCESSES_LISTENING_ON_PORT","value":"false"},{"name":"ROX_QUAY_ROBOT_ACCOUNTS","value":"true"},{"name":"ROX_ROXCTL_NETPOL_GENERATE","value":"true"},{"name":"ROX_SOURCED_AUTOGENERATED_INTEGRATIONS","value":"false"},{"name":"ROX_SYSLOG_EXTRA_FIELDS","value":"true"},{"name":"ROX_SYSTEM_HEALTH_PF","value":"false"},{"name":"ROX_VULN_MGMT_WORKLOAD_CVES","value":"false"}],"image":"registry.redhat.io/advanced-cluster-security/rhacs-scanner-slim-rhel8:4.7.0","imagePullPolicy":"IfNotPresent","name":"node-inventory","ports":[{"containerPort":8444,"name":"grpc","protocol":"TCP"}],"volumeMounts":[{"mountPath":"/host","name":"host-root-ro","readOnly":true},{"mountPath":"/tmp/","name":"tmp-volume"},{"mountPath":"/cache","name":"cache-volume"}]}]}}}}' Additional resources Scanning RHCOS node hosts | [
"oc -n rhacs-operator delete subscription rhacs-operator",
"kubectl -n rhacs-operator delete subscription rhacs-operator",
"oc -n rhacs-operator delete csv -l operators.coreos.com/rhacs-operator.rhacs-operator",
"kubectl -n rhacs-operator delete csv -l operators.coreos.com/rhacs-operator.rhacs-operator",
"oc -n rhacs-operator describe securedclusters.platform.stackrox.io 1",
"Conditions: Last Transition Time: 2023-04-19T10:49:57Z Status: False Type: Deployed Last Transition Time: 2023-04-19T10:49:57Z Status: True Type: Initialized Last Transition Time: 2023-04-19T10:59:10Z Message: Deployment.apps \"central\" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: \"50\": must be less than or equal to cpu limit Reason: ReconcileError Status: True Type: Irreconcilable Last Transition Time: 2023-04-19T10:49:57Z Message: No proxy configuration is desired Reason: NoProxyConfig Status: False Type: ProxyConfigFailed Last Transition Time: 2023-04-19T10:49:57Z Message: Deployment.apps \"central\" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: \"50\": must be less than or equal to cpu limit Reason: InstallError Status: True Type: ReleaseFailed",
"-n rhacs-operator logs deploy/rhacs-operator-controller-manager manager 1",
"helm repo update",
"helm search repo -l rhacs/",
"helm upgrade -n stackrox stackrox-secured-cluster-services rhacs/secured-cluster-services --version <current-rhacs-version> \\ 1 -f values-private.yaml",
"ROXPATH=USD(which roxctl) && rm -f USDROXPATH 1",
"arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"",
"curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.7.0/bin/Linux/roxctlUSD{arch}\"",
"chmod +x roxctl",
"echo USDPATH",
"roxctl version",
"arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"",
"curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.7.0/bin/Darwin/roxctlUSD{arch}\"",
"xattr -c roxctl",
"chmod +x roxctl",
"echo USDPATH",
"roxctl version",
"curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.7.0/bin/Windows/roxctl.exe",
"roxctl version",
"oc -n stackrox set image deploy/sensor sensor=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.7.0 1",
"oc -n stackrox set image ds/collector compliance=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.7.0 1",
"oc -n stackrox set image ds/collector collector=registry.redhat.io/advanced-cluster-security/rhacs-collector-rhel8:4.7.0 1",
"oc -n stackrox set image deploy/admission-control admission-control=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.7.0",
"oc -n stackrox describe pods | grep 'openshift.io/scc\\|^Name:'",
"Name: admission-control-6f4dcc6b4c-2phwd openshift.io/scc: stackrox-admission-control # Name: central-575487bfcb-sjdx8 openshift.io/scc: stackrox-central Name: central-db-7c7885bb-6bgbd openshift.io/scc: stackrox-central-db Name: collector-56nkr openshift.io/scc: stackrox-collector # Name: scanner-68fc55b599-f2wm6 openshift.io/scc: stackrox-scanner Name: scanner-68fc55b599-fztlh # Name: sensor-84545f86b7-xgdwf openshift.io/scc: stackrox-sensor #",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role 1 metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: collector app.kubernetes.io/instance: stackrox-secured-cluster-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-secured-cluster-services app.kubernetes.io/version: 4.4.0 auto-upgrade.stackrox.io/component: sensor name: use-privileged-scc 2 namespace: stackrox 3 rules: 4 - apiGroups: - security.openshift.io resourceNames: - privileged resources: - securitycontextconstraints verbs: - use - - - apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding 5 metadata: annotations: email: [email protected] owner: stackrox labels: app.kubernetes.io/component: collector app.kubernetes.io/instance: stackrox-secured-cluster-services app.kubernetes.io/name: stackrox app.kubernetes.io/part-of: stackrox-secured-cluster-services app.kubernetes.io/version: 4.4.0 auto-upgrade.stackrox.io/component: sensor name: collector-use-scc 6 namespace: stackrox roleRef: 7 apiGroup: rbac.authorization.k8s.io kind: Role name: use-privileged-scc subjects: 8 - kind: ServiceAccount name: collector namespace: stackrox - - -",
"oc -n stackrox create -f ./update-scs.yaml",
"oc delete scc/stackrox-admission-control scc/stackrox-collector scc/stackrox-sensor",
"oc -n stackrox describe pods | grep 'openshift.io/scc\\|^Name:'",
"oc -n stackrox edit deploy/sensor 1",
"oc -n stackrox edit deploy/collector 1",
"oc -n stackrox edit deploy/admission-control 1",
"oc get deploy,ds -n stackrox -o wide 1",
"oc get pod -n stackrox --watch 1",
"oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"compliance\",\"env\":[{\"name\":\"ROX_METRICS_PORT\",\"value\":\"disabled\"},{\"name\":\"ROX_NODE_SCANNING_ENDPOINT\",\"value\":\"127.0.0.1:8444\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL\",\"value\":\"4h\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL_DEVIATION\",\"value\":\"24m\"},{\"name\":\"ROX_NODE_SCANNING_MAX_INITIAL_WAIT\",\"value\":\"5m\"},{\"name\":\"ROX_RHCOS_NODE_SCANNING\",\"value\":\"true\"},{\"name\":\"ROX_CALL_NODE_INVENTORY_ENABLED\",\"value\":\"true\"}]}]}}}}'",
"oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"compliance\",\"env\":[{\"name\":\"ROX_METRICS_PORT\",\"value\":\":9091\"},{\"name\":\"ROX_NODE_SCANNING_ENDPOINT\",\"value\":\"127.0.0.1:8444\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL\",\"value\":\"4h\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL_DEVIATION\",\"value\":\"24m\"},{\"name\":\"ROX_NODE_SCANNING_MAX_INITIAL_WAIT\",\"value\":\"5m\"},{\"name\":\"ROX_RHCOS_NODE_SCANNING\",\"value\":\"true\"},{\"name\":\"ROX_CALL_NODE_INVENTORY_ENABLED\",\"value\":\"true\"}]}]}}}}'",
"oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"volumes\":[{\"name\":\"tmp-volume\",\"emptyDir\":{}},{\"name\":\"cache-volume\",\"emptyDir\":{\"sizeLimit\":\"200Mi\"}}]}}}}'",
"oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"command\":[\"/scanner\",\"--nodeinventory\",\"--config=\",\"\"],\"env\":[{\"name\":\"ROX_NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"apiVersion\":\"v1\",\"fieldPath\":\"spec.nodeName\"}}},{\"name\":\"ROX_CLAIR_V4_SCANNING\",\"value\":\"true\"},{\"name\":\"ROX_COMPLIANCE_OPERATOR_INTEGRATION\",\"value\":\"true\"},{\"name\":\"ROX_CSV_EXPORT\",\"value\":\"false\"},{\"name\":\"ROX_DECLARATIVE_CONFIGURATION\",\"value\":\"false\"},{\"name\":\"ROX_INTEGRATIONS_AS_CONFIG\",\"value\":\"false\"},{\"name\":\"ROX_NETPOL_FIELDS\",\"value\":\"true\"},{\"name\":\"ROX_NETWORK_DETECTION_BASELINE_SIMULATION\",\"value\":\"true\"},{\"name\":\"ROX_NETWORK_GRAPH_PATTERNFLY\",\"value\":\"true\"},{\"name\":\"ROX_NODE_SCANNING_CACHE_TIME\",\"value\":\"3h36m\"},{\"name\":\"ROX_NODE_SCANNING_INITIAL_BACKOFF\",\"value\":\"30s\"},{\"name\":\"ROX_NODE_SCANNING_MAX_BACKOFF\",\"value\":\"5m\"},{\"name\":\"ROX_PROCESSES_LISTENING_ON_PORT\",\"value\":\"false\"},{\"name\":\"ROX_QUAY_ROBOT_ACCOUNTS\",\"value\":\"true\"},{\"name\":\"ROX_ROXCTL_NETPOL_GENERATE\",\"value\":\"true\"},{\"name\":\"ROX_SOURCED_AUTOGENERATED_INTEGRATIONS\",\"value\":\"false\"},{\"name\":\"ROX_SYSLOG_EXTRA_FIELDS\",\"value\":\"true\"},{\"name\":\"ROX_SYSTEM_HEALTH_PF\",\"value\":\"false\"},{\"name\":\"ROX_VULN_MGMT_WORKLOAD_CVES\",\"value\":\"false\"}],\"image\":\"registry.redhat.io/advanced-cluster-security/rhacs-scanner-slim-rhel8:4.7.0\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"node-inventory\",\"ports\":[{\"containerPort\":8444,\"name\":\"grpc\",\"protocol\":\"TCP\"}],\"volumeMounts\":[{\"mountPath\":\"/host\",\"name\":\"host-root-ro\",\"readOnly\":true},{\"mountPath\":\"/tmp/\",\"name\":\"tmp-volume\"},{\"mountPath\":\"/cache\",\"name\":\"cache-volume\"}]}]}}}}'"
]
| https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/rhacs_cloud_service/upgrading-rhacs-cloud-service |
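The manual image updates in section 9.3.2.1 are four separate oc set image commands per secured cluster. The following sketch simply strings them together and waits for the rollouts; the version and registry values are the ones used in this chapter (4.7.0) and are assumptions you should replace with your target release.

#!/usr/bin/env bash
# Sketch: apply the documented manual image updates to one secured cluster, then wait for rollouts.
# Use kubectl instead of oc on plain Kubernetes clusters.
set -euo pipefail
RHACS_VERSION="4.7.0"                                    # assumption: release used in this chapter
REGISTRY="registry.redhat.io/advanced-cluster-security"  # assumption: default Red Hat registry
oc -n stackrox set image deploy/sensor sensor="${REGISTRY}/rhacs-main-rhel8:${RHACS_VERSION}"
oc -n stackrox set image ds/collector compliance="${REGISTRY}/rhacs-main-rhel8:${RHACS_VERSION}"
oc -n stackrox set image ds/collector collector="${REGISTRY}/rhacs-collector-rhel8:${RHACS_VERSION}"
oc -n stackrox set image deploy/admission-control admission-control="${REGISTRY}/rhacs-main-rhel8:${RHACS_VERSION}"
# Wait for the new pods before moving on to the next secured cluster
oc -n stackrox rollout status deploy/sensor
oc -n stackrox rollout status deploy/admission-control
oc -n stackrox rollout status ds/collector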
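The GOMEMLIMIT-to-ROX_MEMLIMIT rename in sections 9.3.2.2.1 through 9.3.2.2.3 is described as an interactive oc edit of three deployments. A non-interactive sketch follows; it assumes the variable is set directly on the deployment (not injected from a ConfigMap), so list the environment first and skip any deployment where that does not hold.

#!/usr/bin/env bash
# Sketch: copy the value of GOMEMLIMIT into ROX_MEMLIMIT and drop GOMEMLIMIT
# for the three deployments named in the upgrade procedure.
set -euo pipefail
for deploy in sensor collector admission-control; do
  # Take the first GOMEMLIMIT value reported for this deployment, if any
  current=$(oc -n stackrox set env "deploy/${deploy}" --list | awk -F= '/^GOMEMLIMIT=/{print $2}' | head -n 1)
  if [ -n "${current}" ]; then
    # Passing VAR=value and VAR- in one call adds the new variable and removes the old one
    oc -n stackrox set env "deploy/${deploy}" ROX_MEMLIMIT="${current}" GOMEMLIMIT-
  fi
done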
Assessing and Reporting Malware Signatures on RHEL Systems | Assessing and Reporting Malware Signatures on RHEL Systems Red Hat Insights 1-latest Know when systems in your RHEL infrastructure are exposed to malware risks Red Hat Customer Content Services | [
"sudo dnf install yara",
"sudo yum install insights-client",
"sudo insights-client --test-connection",
"sudo insights-client --register",
"sudo insights-client --collector malware-detection",
"test_scan: false",
"sudo insights-client --collector malware-detection",
"scan_processes: true",
"sudo insights-client --collector malware-detection",
"FILESYSTEM_SCAN_ONLY=/etc,/tmp,/var/lib",
"sudo FILESYSTEM_SCAN_ONLY=/etc,/tmp,/var/lib TEST_SCAN=false insights-client --collector malware-detection"
]
| https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html-single/assessing_and_reporting_malware_signatures_on_rhel_systems/index |
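The individual commands above cover installation, registration, and the first scan separately. The sketch below chains them for a first run; the scan scope and the TEST_SCAN=false override are taken from the environment-variable example in the list, while the combined package install is an assumption for a dnf-based RHEL host that has neither yara nor insights-client installed yet.

#!/usr/bin/env bash
# Sketch: install prerequisites, register the host, and run a first scoped malware-detection scan.
set -euo pipefail
sudo dnf install -y yara insights-client          # assumption: dnf-based RHEL host
sudo insights-client --test-connection
sudo insights-client --register
# Scope the first filesystem scan and run a real scan (not the signature test scan),
# mirroring the environment-variable example above.
sudo FILESYSTEM_SCAN_ONLY=/etc,/tmp,/var/lib TEST_SCAN=false \
    insights-client --collector malware-detection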
5.4. Virtual Memory | 5.4. Virtual Memory 5.4.1. Hot Plugging Virtual Memory You can hot plug virtual memory. Hot plugging means enabling or disabling devices while a virtual machine is running. Each time memory is hot plugged, it appears as a new memory device in the Vm Devices tab in the details view of the virtual machine, up to a maximum of 16 available slots. When the virtual machine is restarted, these devices are cleared from the Vm Devices tab without reducing the virtual machine's memory, allowing you to hot plug more memory devices. If the hot plug fails (for example, if there are no more available slots), the memory increase will be applied when the virtual machine is restarted. Important This feature is currently not supported for the self-hosted engine Manager virtual machine. Hot Plugging Virtual Memory Click Compute Virtual Machines and select a running virtual machine. Click Edit . Click the System tab. Increase the Memory Size by entering the total amount required. Memory can be added in multiples of 256 MB. By default, the maximum memory allowed for the virtual machine is set to 4x the memory size specified. Though the value is changed in the user interface, the maximum value is not hot plugged, and you will see the pending changes icon ( ). To avoid that, you can change the maximum memory back to the original value. Click OK . This action opens the Pending Virtual Machine changes window, as some values such as maxMemorySizeMb and minAllocatedMem will not change until the virtual machine is restarted. However, the hot plug action is triggered by the change to the Memory Size value, which can be applied immediately. Click OK . The virtual machine's Defined Memory is updated in the General tab in the details view. You can see the newly added memory device in the Vm Devices tab in the details view. 5.4.2. Hot Unplugging Virtual Memory You can hot unplug virtual memory. Hot unplugging means disabling devices while a virtual machine is running. Important Only memory added with hot plugging can be hot unplugged. The virtual machine operating system must support memory hot unplugging. The virtual machines must not have a memory balloon device enabled. This feature is disabled by default. All blocks of the hot-plugged memory must be set to online_movable in the virtual machine's device management rules. In virtual machines running up-to-date versions of Red Hat Enterprise Linux or CoreOS, this rule is set by default. For information on device management rules, consult the documentation for the virtual machine's operating system. If any of these conditions are not met, the memory hot unplug action may fail or cause unexpected behavior. Hot Unplugging Virtual Memory Click Compute Virtual Machines and select a running virtual machine. Click the Vm Devices tab. In the Hot Unplug column, click Hot Unplug beside the memory device to be removed. Click OK in the Memory Hot Unplug window. The Physical Memory Guaranteed value for the virtual machine is decremented automatically if necessary. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/sect-virtual_memory |
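Hot unplugging depends on the hot-plugged blocks being online as movable inside the guest, and that prerequisite can be verified from the guest itself. The sysfs check below uses the standard kernel interface; the udev rule is only an illustration of how a guest might enforce the policy, not configuration shipped by Red Hat Virtualization, so treat it as an assumption and consult the guest operating system documentation.

# Inside the guest: every hot-plugged block must report "online_movable" before a hot unplug can succeed.
grep -H . /sys/devices/system/memory/memory*/state

# Illustrative udev rule (assumption, adjust for your guest OS) that onlines newly added memory as movable:
#   /etc/udev/rules.d/99-hotplug-memory.rules
#   SUBSYSTEM=="memory", ACTION=="add", ATTR{state}=="offline", ATTR{state}="online_movable"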
Chapter 3. Migrating to automation execution environments | Chapter 3. Migrating to automation execution environments 3.1. Why upgrade to automation execution environments? Red Hat Ansible Automation Platform 2.4 introduces automation execution environments. Automation execution environments are container images that allow for easier administration of Ansible by including everything needed to run Ansible automation within a single container. Automation execution environments include: RHEL UBI 8 Ansible-core 2.14 or later Python 3.9 or later. Any Ansible Content Collections Collection python or binary dependencies By including these elements, Ansible provides platform administrators a standardized way to define, build, and distribute the environments the automation runs in. Due to the new automation execution environment, it is no longer necessary for administrators to create custom plugins and automation content. Administrators can now spin up smaller automation execution environments in less time to create their content. All custom dependencies are now defined in the development phase instead of the administration and deployment phase. Decoupling from the control plane enables faster development cycles, scalability, reliability, and portability across environments. Automation execution environments enables the Ansible Automation Platform to move to a distributed architecture allowing administrators to scale automation across their organization. 3.2. About migrating legacy venvs to automation execution environments When upgrading from older versions of automation controller to version 4.0 or later, the controller can detect versions of virtual environments associated with Organizations, Inventory and Job Templates and informs you to migrate to the new automation execution environments model. A new installation of automation controller creates two virtualenvs during the installation; one runs the controller and the other runs Ansible. Like legacy virtual environments, automation execution environments allow the controller to run in a stable environment, while allowing you to add or update modules to your automation execution environments as necessary to run your playbooks. You can duplicate your setup in an automation execution environment from a custom virtual environment by migrating them to the new automation execution environment. Use the awx-manage commands in this section to: list of all the current custom virtual environments and their paths ( list_custom_venvs ) view the resources that rely a particular custom virtual environment ( custom_venv_associations ) export a particular custom virtual environment to a format that can be used to migrate to an automation execution environment ( export_custom_venv ) The below workflow describes how to migrate from legacy venvs to automation execution environments using the awx-manage command. 3.3. Migrating virtual envs to automation execution environments Use the following sections to assist with additional steps in the migration process once you have upgraded to Red Hat Ansible Automation Platform 2.0 and automation controller 4.0. 3.3.1. Listing custom virtual environments You can list the virtual environments on your automation controller instance using the awx-manage command. Procedure SSH into your automation controller instance and run: USD awx-manage list_custom_venvs A list of discovered virtual environments will appear. 
# Discovered virtual environments: /var/lib/awx/venv/testing /var/lib/venv/new_env To export the contents of a virtual environment, re-run while supplying the path as an argument: awx-manage export_custom_venv /path/to/venv 3.3.2. Viewing objects associated with a custom virtual environment View the organizations, jobs, and inventory sources associated with a custom virtual environment using the awx-manage command. Procedure SSH into your automation controller instance and run: USD awx-manage custom_venv_associations /path/to/venv A list of associated objects will appear. inventory_sources: - id: 15 name: celery job_templates: - id: 9 name: Demo Job Template @ 2:40:47 PM - id: 13 name: elephant organizations - id: 3 name: alternating_bongo_meow - id: 1 name: Default projects: [] 3.3.3. Selecting the custom virtual environment to export Select the custom virtual environment you want to export by using awx-manage export_custom_venv command. Procedure SSH into your automation controller instance and run: USD awx-manage export_custom_venv /path/to/venv The output from this command will show a pip freeze of what is in the specified virtual environment. This information can be copied into a requirements.txt file for Ansible Builder to use for creating a new automation execution environments image. numpy==1.20.2 pandas==1.2.4 python-dateutil==2.8.1 pytz==2021.1 six==1.16.0 To list all available custom virtual environments run: awx-manage list_custom_venvs Note Pass the -q flag when running awx-manage list_custom_venvs to reduce output. | [
"awx-manage list_custom_venvs",
"Discovered virtual environments: /var/lib/awx/venv/testing /var/lib/venv/new_env To export the contents of a virtual environment, re-run while supplying the path as an argument: awx-manage export_custom_venv /path/to/venv",
"awx-manage custom_venv_associations /path/to/venv",
"inventory_sources: - id: 15 name: celery job_templates: - id: 9 name: Demo Job Template @ 2:40:47 PM - id: 13 name: elephant organizations - id: 3 name: alternating_bongo_meow - id: 1 name: Default projects: []",
"awx-manage export_custom_venv /path/to/venv",
"numpy==1.20.2 pandas==1.2.4 python-dateutil==2.8.1 pytz==2021.1 six==1.16.0 To list all available custom virtual environments run: awx-manage list_custom_venvs"
]
| https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_upgrade_and_migration_guide/upgrading-to-ees |
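The exported pip freeze is meant to feed Ansible Builder, as noted in section 3.3.3. The sketch below wires the two steps together; the venv path, the image tag, and the minimal execution-environment.yml schema are assumptions for illustration, so review the generated requirements file (and trim the trailing help text the command prints) before building.

#!/usr/bin/env bash
# Sketch: export one custom virtualenv and build an execution environment image from it.
set -euo pipefail
VENV_PATH=/var/lib/awx/venv/testing            # assumption: one of the paths reported by list_custom_venvs
awx-manage export_custom_venv "${VENV_PATH}" > requirements.txt
# Review requirements.txt and remove any non-package lines before continuing.
cat > execution-environment.yml <<'EOF'
# assumption: minimal Ansible Builder v1 definition
version: 1
dependencies:
  python: requirements.txt
EOF
ansible-builder build -t example.org/custom-ee:latest -f execution-environment.yml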
6.12. Affinity Labels | 6.12. Affinity Labels 6.12.1. About Affinity Labels You can create and modify Affinity Labels in the Administration Portal. Affinity Labels are used together with Affinity Groups to set any kind of affinity between virtual machines and hosts (hard, soft, positive, negative). See the Affinity Groups section for more information about affinity hardness and polarity. Warning Affinity labels are a subset of affinity groups and can conflict with them. If there is a conflict, the virtual machine will not start. 6.12.2. Creating an Affinity Label You can create affinity labels from the details view of a virtual machine, host, or cluster. This procedure uses the cluster details view. Creating an Affinity Label Click Compute Clusters and select the appropriate cluster. Click the cluster's name to go to the details view. Click the Affinity Labels tab. Click New . Enter a Name for the affinity label. Use the drop-down lists to select the virtual machines and hosts to be associated with the label. Use the + button to add additional virtual machines and hosts. Click OK . 6.12.3. Editing an Affinity Label You can edit affinity labels from the details view of a virtual machine, host, or cluster. This procedure uses the cluster details view. Editing an Affinity Label Click Compute Clusters and select the appropriate cluster. Click the cluster's name to go to the details view. Click the Affinity Labels tab. Select the label you want to edit. Click Edit . Use the + and - buttons to add or remove virtual machines and hosts to or from the affinity label. Click OK . 6.12.4. Deleting an Affinity Label You can only remove an Affinity Label from the details view of a cluster after it is deleted from each entity. Deleting an Affinity Label Click Compute Clusters and select the appropriate cluster. Click the cluster's name to go to the details view. Click the Affinity Labels tab. Select the label you want to remove. Click Edit . Use the - buttons to remove all virtual machines and hosts from the label. Click OK . Click Delete . Click OK . | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/sect-affinity_labels |
Chapter 3. Usage | Chapter 3. Usage This chapter describes the necessary steps for using Red Hat Software Collections 3.8, and deploying applications that use Red Hat Software Collections. 3.1. Using Red Hat Software Collections 3.1.1. Running an Executable from a Software Collection To run an executable from a particular Software Collection, type the following command at a shell prompt: scl enable software_collection ... ' command ...' Or, alternatively, use the following command: scl enable software_collection ... -- command ... Replace software_collection with a space-separated list of Software Collections you want to use and command with the command you want to run. For example, to execute a Perl program stored in a file named hello.pl with the Perl interpreter from the perl526 Software Collection, type: You can execute any command using the scl utility, causing it to be run with the executables from a selected Software Collection in preference to their possible Red Hat Enterprise Linux system equivalents. For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections Components" . 3.1.2. Running a Shell Session with a Software Collection as Default To start a new shell session with executables from a selected Software Collection in preference to their Red Hat Enterprise Linux equivalents, type the following at a shell prompt: scl enable software_collection ... bash Replace software_collection with a space-separated list of Software Collections you want to use. For example, to start a new shell session with the python27 and rh-postgresql12 Software Collections as default, type: The list of Software Collections that are enabled in the current session is stored in the USDX_SCLS environment variable, for instance: For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections Components" . 3.1.3. Running a System Service from a Software Collection In Red Hat Enterprise Linux 7, init scripts have been replaced by systemd service unit files, which end with the .service file extension and serve a similar purpose as init scripts. To start a service in the current session, execute the following command as root : systemctl start software_collection - service_name .service Replace software_collection with the name of the Software Collection and service_name with the name of the service you want to start. To configure this service to start automatically at boot time, type the following command as root : systemctl enable software_collection - service_name .service For example, to start the postgresql service from the rh-postgresql12 Software Collection and enable it at boot time, type as root : For more information on how to manage system services in Red Hat Enterprise Linux 7, refer to the Red Hat Enterprise Linux 7 System Administrator's Guide . For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections Components" . 3.2. Accessing a Manual Page from a Software Collection Every Software Collection contains a general manual page that describes the content of this component. Each manual page has the same name as the component and it is located in the /opt/rh directory. 
To read a manual page for a Software Collection, type the following command: scl enable software_collection 'man software_collection ' Replace software_collection with the particular Red Hat Software Collections component. For example, to display the manual page for rh-mariadb105 , type: 3.3. Deploying Applications That Use Red Hat Software Collections In general, you can use one of the following two approaches to deploy an application that depends on a component from Red Hat Software Collections in production: Install all required Software Collections and packages manually and then deploy your application, or Create a new Software Collection for your application and specify all required Software Collections and other packages as dependencies. For more information on how to manually install individual Red Hat Software Collections components, see Section 2.2, "Installing Red Hat Software Collections" . For further details on how to use Red Hat Software Collections, see Section 3.1, "Using Red Hat Software Collections" . For a detailed explanation of how to create a custom Software Collection or extend an existing one, read the Red Hat Software Collections Packaging Guide . 3.4. Red Hat Software Collections Container Images Container images based on Red Hat Software Collections include applications, daemons, and databases. The images can be run on Red Hat Enterprise Linux 7 Server and Red Hat Enterprise Linux Atomic Host. For information about their usage, see Using Red Hat Software Collections 3 Container Images . For details regarding container images based on Red Hat Software Collections versions 2.4 and earlier, see Using Red Hat Software Collections 2 Container Images . Note that only the latest version of each container image is supported. The following container images are available with Red Hat Software Collections 3.8: rhscl/devtoolset-12-toolchain-rhel7 (available since November 2022) rhscl/devtoolset-12-perftools-rhel7 (available since November 2022) rhscl/nginx-120-rhel7 rhscl/redis-6-rhel7 The following container images are based on Red Hat Software Collections 3.7: rhscl/mariadb-105-rhel7 rhscl/postgresql-13-rhel7 rhscl/ruby-30-rhel7 The following container images are based on Red Hat Software Collections 3.6: rhscl/httpd-24-rhel7 rhscl/nodej-14-rhel7 rhscl/perl-530-rhel7 rhscl/php-73-rhel7 The following container images are based on Red Hat Software Collections 3.5: rhscl/python-38-rhel7 rhscl/varnish-6-rhel7 The following container image is based on Red Hat Software Collections 3.4: rhscl/postgresql-12-rhel7 The following container image is based on Red Hat Software Collections 3.2: rhscl/mysql-80-rhel7 The following container image is based on Red Hat Software Collections 3.1: rhscl/postgresql-10-rhel7 The following container image is based on Red Hat Software Collections 2: rhscl/s2i-base-rhel7 | [
"~]USD scl enable rh-perl526 'perl hello.pl' Hello, World!",
"~]USD scl enable python27 rh-postgresql12 bash",
"~]USD echo USDX_SCLS python27 rh-postgresql12",
"~]# systemctl start rh-postgresql12-postgresql.service ~]# systemctl enable rh-postgresql12-postgresql.service",
"~]USD scl enable rh-mariadb105 \"man rh-mariadb105\""
]
| https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.8_release_notes/chap-usage |
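Both scl invocation forms shown in this chapter (quoting the command, or separating it with --) compose cleanly in scripts. A short sketch using the collections named above:

#!/usr/bin/env bash
# Sketch: run one-off commands with Software Collections enabled and confirm which are active.
set -euo pipefail
# Single collection, single command (same as the perl example above)
scl enable rh-perl526 -- perl -e 'print "Hello, World!\n"'
# Several collections at once; $X_SCLS lists what is enabled inside the scl environment
scl enable python27 rh-postgresql12 -- bash -c 'echo "Enabled collections: $X_SCLS"'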
Chapter 4. Specifics of Individual Software Collections | Chapter 4. Specifics of Individual Software Collections This chapter is focused on the specifics of certain Software Collections and provides additional details concerning these components. 4.1. Red Hat Developer Toolset Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. Red Hat Developer Toolset provides current versions of the GNU Compiler Collection , GNU Debugger , and other development, debugging, and performance monitoring tools. Similarly to other Software Collections, an additional set of tools is installed into the /opt/ directory. These tools are enabled by the user on demand using the supplied scl utility. Similarly to other Software Collections, these do not replace the Red Hat Enterprise Linux system versions of these tools, nor will they be used in preference to those system versions unless explicitly invoked using the scl utility. For an overview of features, refer to the Main Features section of the Red Hat Developer Toolset Release Notes . For a complete list of components, see the Red Hat Developer Toolset Components table in the Red Hat Developer Toolset User Guide . Note that since Red Hat Developer Toolset 3.1, Red Hat Developer Toolset requires the rh-java-common Software Collection. 4.2. Eclipse 4.6.3 The rh-eclipse46 Software Collection, available for Red Hat Enterprise Linux 7, includes Eclipse 4.6.3 , which is based on the Eclipse Foundation's Neon release train. This integrated development environment was previously available as a part of Red Hat Developer Toolset. Note that the rh-eclipse46 Software Collection requires the rh-java-common Collection. Note A new version of Eclipse is now available as the rh-eclipse47 component of the Red Hat Developer Tools offering. For more information, see Using Eclipse . Eclipse is a powerful development environment that provides tools for each phase of the development process. It integrates a variety of disparate tools into a unified environment to create a rich development experience, provides a fully configurable user interface, and features a pluggable architecture that allows for an extension in a variety of ways. For instance, the Valgrind plug-in allows programmers to perform memory profiling, otherwise performed on the command line, through the Eclipse user interface. Figure 4.1. Sample Eclipse Session Eclipse provides a graphical development environment alternative to traditional interaction with command line tools and as such, it is a welcome alternative to developers who do not want to use the command line interface. The traditional, mostly command line-based Linux tools suite (such as gcc or gdb ) and Eclipse offer two distinct approaches to programming. Note that if you intend to develop applications for Red Hat JBoss Middleware or require support for OpenShift Tools, it is recommended that you use Red Hat JBoss Developer Studio . Table 4.1. Eclipse Components Included in the rh-eclipse46 Software Collection Package Description rh-eclipse46-eclipse-cdt The C/C++ Development Tooling ( CDT ), which provides features and plug-ins for development in C and C++. rh-eclipse46-eclipse-changelog The ChangeLog plug-in, which allows you to create and maintain changelog files. rh-eclipse46-eclipse-egit EGit, a team provider for Eclipse that provides features and plug-ins for interaction with Git repositories. 
rh-eclipse46-eclipse-emf The Eclipse Modeling Framework ( EMF ), which allows you to build applications based on a structured data model. rh-eclipse46-eclipse-epp-logging The Eclipse error reporting tool. rh-eclipse46-eclipse-gcov The GCov plug-in, which integrates the GCov test coverage program with Eclipse . rh-eclipse46-eclipse-gef The Graphical Editing Framework ( GEF ), which allows you to create a rich graphical editor from an existing application model. rh-eclipse46-eclipse-gprof The Gprof plug-in, which integrates the Gprof performance analysis utility with Eclipse . rh-eclipse46-eclipse-jdt The Eclipse Java development tools ( JDT ) plug-in. rh-eclipse46-eclipse-jgit JGit, a Java implementation of the Git revision control system. rh-eclipse46-eclipse-manpage The Man Page plug-in, which allows you to view manual pages in Eclipse . rh-eclipse46-eclipse-mpc The Eclipse Marketplace Client. rh-eclipse46-eclipse-mylyn Mylyn, a task management system for Eclipse . rh-eclipse46-eclipse-oprofile The OProfile plug-in, which integrates OProfile with Eclipse . rh-eclipse46-eclipse-pde The Plugin Development Environment for developing Eclipse plugins. rh-eclipse46-eclipse-perf The Perf plug-in, which integrates the perf tool with Eclipse . rh-eclipse46-eclipse-ptp A subset of the PTP project providing support for synchronized projects. rh-eclipse46-eclipse-pydev A full featured Python IDE for Eclipse . rh-eclipse46-eclipse-remote The Remote Services plug-in, which provides an extensible remote-services framework. rh-eclipse46-eclipse-rpm-editor The Eclipse Spec File Editor, which allows you to maintain RPM spec files. rh-eclipse46-eclipse-rse The Remote System Explorer ( RSE ) framework, which allows you to work with remote systems from Eclipse . rh-eclipse46-eclipse-systemtap The SystemTap plug-in, which integrates SystemTap with Eclipse . rh-eclipse46-eclipse-valgrind The Valgrind plug-in, which integrates Valgrind with Eclipse . rh-eclipse46-eclipse-webtools The Eclipse Webtools plug-ins. 4.2.1. Installing Eclipse The Eclipse development environment is provided as a collection of RPM packages. To install the rh-eclipse46 Software Collection, type the following command as root : yum install rh-eclipse46 For a list of available components, see Table 4.1, "Eclipse Components Included in the rh-eclipse46 Software Collection" . Note The rh-eclipse46 Software Collection fully supports C, C++, and Java development, but does not provide support for the Fortran programming language. 4.2.2. Using Eclipse To start the rh-eclipse46 Software Collection, either select Applications Programming Red Hat Eclipse from the panel, or type the following at a shell prompt: scl enable rh-eclipse46 eclipse During its startup, Eclipse prompts you to select a workspace , that is, a directory in which you want to store your projects. You can either use ~/workspace/ , which is the default option, or click the Browse button to browse your file system and select a custom directory. Additionally, you can select the Use this as the default and do not ask again check box to prevent Eclipse from displaying this dialog box the time you run this development environment. When you are done, click the OK button to confirm the selection and proceed with the startup. 4.2.2.1. 
Using the Red Hat Developer Toolset Toolchain To use the rh-eclipse46 Software Collection with support for the GNU Compiler Collection and binutils from Red Hat Developer Toolset, make sure that the devtoolset-7-toolchain package is installed and run the application as described in Section 4.2.2, "Using Eclipse" . The rh-eclipse46 Collection uses the Red Hat Developer Toolset toolchain by default. For detailed instructions on how to install the devtoolset-7-toolchain package in your system, see the Red Hat Developer Toolset User Guide . Important If you are working on a project that you previously built with the Red Hat Enterprise Linux version of the GNU Compiler Collection , make sure that you discard all build results. To do so, open the project in Eclipse and select Project Clean from the menu. 4.2.2.2. Using the Red Hat Enterprise Linux Toolchain To use the rh-eclipse46 Software Collection with support for the toolchain distributed with Red Hat Enterprise Linux, change the configuration of the project to use absolute paths to the Red Hat Enterprise Linux system versions of gcc , g++ , and as . To configure Eclipse to explicitly use the Red Hat Enterprise Linux system versions of the tools for the current project, complete the following steps: In the C/C++ perspective, choose Project Properties from the main menu bar to open the project properties. In the menu on the left-hand side of the dialog box, click C/C++ Build Settings . Select the Tool Settings tab. If you are working on a C project: select GCC C Compiler or Cross GCC Compiler and change the value of the Command field to: select GCC C Linker or Cross GCC Linker and change the value of the Command field to: select GCC Assembler or Cross GCC Assembler and change the value of the Command field to: If you are working on a C++ project: select GCC C++ Compiler or Cross G++ Compiler and change the value of the Command field to: select GCC C Compiler or Cross GCC Compiler and change the value of the Command field to: select GCC C++ Linker or Cross G++ Linker and change the value of the Command field to: select GCC Assembler or Cross GCC Assembler and change the value of the Command field to: Click the OK button to save the configuration changes. 4.2.3. Additional Resources A detailed description of Eclipse and all its features is beyond the scope of this book. For more information, see the resources listed below. Installed Documentation Eclipse includes a built-in Help system, which provides extensive documentation for each integrated feature and tool. This greatly decreases the initial time investment required for new developers to become fluent in its use. The use of this Help section is detailed in the Red Hat Enterprise Linux Developer Guide linked below. See Also Using Eclipse describing usage of the rh-eclipse47 component of Red Hat Developer Tools. The Red Hat Developer Toolset chapter in the Red Hat Developer Toolset User Guide provides an overview of Red Hat Developer Toolset and more information on how to install it on your system. The GNU Compiler Collection (GCC) chapter in the Red Hat Developer Toolset User Guide provides information on how to compile programs written in C, C++, and Fortran on the command line. 4.3. Ruby on Rails 5.0 Red Hat Software Collections 3.0 provides the rh-ruby24 Software Collection together with the rh-ror50 Collection. 
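Once the Collections described below are installed, you can run a single command inside a Software Collection without opening an interactive shell, in the same way that later sections invoke the MongoDB shell or Passenger in Standalone mode. As a quick check, for example, to confirm the Ruby and Rails versions that the rh-ror50 Collection provides (the exact output depends on the packages installed in your environment):
scl enable rh-ror50 'ruby --version'
scl enable rh-ror50 'rails --version'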
To install Ruby on Rails 5.0 , type the following command as root : yum install rh-ror50 Installing any package from the rh-ror50 Software Collection automatically pulls in rh-ruby24 and rh-nodejs6 as dependencies. The rh-nodejs6 Collection is used by certain gems in an asset pipeline to post-process web resources, for example, sass or coffee-script source files. Additionally, the Action Cable framework uses rh-nodejs6 for handling WebSockets in Rails. To run the rails s command without requiring rh-nodejs6 , disable the coffee-rails and uglifier gems in the Gemfile . To run Ruby on Rails without Node.js , run the following command, which will automatically enable rh-ruby24 : scl enable rh-ror50 bash To run Ruby on Rails with all features, enable also the rh-nodejs6 Software Collection: scl enable rh-ror50 rh-nodejs6 bash The rh-ror50 Software Collection is supported together with the rh-ruby24 and rh-nodejs6 components. 4.4. MongoDB 3.4 To install the rh-mongodb34 collection, type the following command as root : yum install rh-mongodb34 To run the MongoDB shell utility, type the following command: scl enable rh-mongodb34 'mongo' Note The rh-mongodb34-mongo-cxx-driver package has been built with the -std=gnu++14 option using GCC from Red Hat Developer Toolset 6. Binaries using the shared library for the MongoDB C++ Driver that use C++11 (or later) features have to be built also with Red Hat Developer Toolset 6. See C++ compatibility details in the Red Hat Developer Toolset 6 User Guide . MongoDB 3.4 on Red Hat Enterprise Linux 6 If you are using Red Hat Enterprise Linux 6, the following instructions apply to your system. To start the MongoDB daemon, type the following command as root : service rh-mongodb34-mongod start To start the MongoDB daemon on boot, type this command as root : chkconfig rh-mongodb34-mongod on To start the MongoDB sharding server, type this command as root : service rh-mongodb34-mongos start To start the MongoDB sharding server on boot, type the following command as root : chkconfig rh-mongodb34-mongos on Note that the MongoDB sharding server does not work unless the user starts at least one configuration server and specifies it in the mongos.conf file. MongoDB 3.4 on Red Hat Enterprise Linux 7 When using Red Hat Enterprise Linux 7, the following commands are applicable. To start the MongoDB daemon, type the following command as root : systemctl start rh-mongodb34-mongod.service To start the MongoDB daemon on boot, type this command as root : systemctl enable rh-mongodb34-mongod.service To start the MongoDB sharding server, type the following command as root : systemctl start rh-mongodb34-mongos.service To start the MongoDB sharding server on boot, type this command as root : systemctl enable rh-mongodb34-mongos.service Note that the MongoDB sharding server does not work unless the user starts at least one configuration server and specifies it in the mongos.conf file. 4.5. Git Git is a distributed revision control system with a decentralized architecture. As opposed to centralized version control systems with a client-server model, Git ensures that each working copy of a Git repository is an exact copy with complete revision history. This not only allows you to work on and contribute to projects without the need to have permission to push your changes to their official repositories, but also makes it possible for you to work with no network connection. For detailed information, see the Git chapter in the Red Hat Enterprise Linux 7 Developer Guide . 4.6. 
Maven The rh-maven35 Software Collection, available only for Red Hat Enterprise Linux 7, provides a software project management and comprehension tool. Based on the concept of a project object model (POM), Maven can manage a project's build, reporting, and documentation from a central piece of information. To install the rh-maven35 Collection, type the following command as root : yum install rh-maven35 To enable this collection, type the following command at a shell prompt: scl enable rh-maven35 bash Global Maven settings, such as remote repositories or mirrors, can be customized by editing the /opt/rh/rh-maven35/root/etc/maven/settings.xml file. For more information about using Maven, refer to the Maven documentation . Usage of plug-ins is described in this section ; to find documentation regarding individual plug-ins, see the index of plug-ins . 4.7. Passenger The rh-passenger40 Software Collection provides Phusion Passenger , a web and application server designed to be fast, robust and lightweight. The rh-passenger40 Collection supports multiple versions of Ruby , particularly the ruby193 , ruby200 , and rh-ruby22 Software Collections together with Ruby on Rails using the ror40 or rh-ror41 Collections. Prior to using Passenger with any of the Ruby Software Collections, install the corresponding package from the rh-passenger40 Collection: the rh-passenger-ruby193 , rh-passenger-ruby200 , or rh-passenger-ruby22 package. The rh-passenger40 Software Collection can also be used with Apache httpd from the httpd24 Software Collection. To do so, install the rh-passenger40-mod_passenger package. Refer to the default configuration file /opt/rh/httpd24/root/etc/httpd/conf.d/passenger.conf for an example of Apache httpd configuration, which shows how to use multiple Ruby versions in a single Apache httpd instance. Additionally, the rh-passenger40 Software Collection can be used with the nginx 1.6 web server from the nginx16 Software Collection. To use nginx 1.6 with rh-passenger40 , you can run Passenger in Standalone mode using the following command in the web appplication's directory: scl enable nginx16 rh-passenger40 'passenger start' Alternatively, edit the nginx16 configuration files as described in the upstream Passenger documentation . 4.8. Database Connectors Database connector packages provide the database client functionality, which is necessary for local or remote connection to a database server. Table 4.2, "Interoperability Between Languages and Databases" lists Software Collections with language runtimes that include connectors for certain database servers. Table 4.2. Interoperability Between Languages and Databases Database Language (Software Collection) MariaDB MongoDB MySQL PostgreSQL Redis rh-nodejs4 rh-nodejs6 rh-nodejs8 rh-perl520 rh-perl524 rh-php56 rh-php70 rh-php71 python27 rh-python34 rh-python35 rh-python36 rh-ror50 rh-ror42 rh-ror41 Supported Unsupported | [
"/usr/bin/gcc",
"/usr/bin/gcc",
"/usr/bin/as",
"/usr/bin/g++",
"/usr/bin/gcc",
"/usr/bin/g++",
"/usr/bin/as"
]
| https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.0_release_notes/chap-individual_collections |
Providing feedback on JBoss EAP documentation | Providing feedback on JBoss EAP documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/updating_red_hat_jboss_enterprise_application_platform/proc_providing-feedback-on-red-hat-documentation_default |
6.2. Native Client | 6.2. Native Client Native Client is a FUSE-based client running in user space. Native Client is the recommended method for accessing Red Hat Gluster Storage volumes when high concurrency and high write performance is required. This section introduces Native Client and describes how to perform the following: Install Native Client packages Mount Red Hat Gluster Storage volumes (manually and automatically) Verify that the Gluster Storage volume has mounted successfully Note Red Hat Gluster Storage server supports the Native Client version which is the same as the server version and the preceding version of Native Client . For list of releases see: https://access.redhat.com/solutions/543123 . From Red Hat Gluster Storage 3.5 batch update 7 onwards, glusterfs-6.0-62 and higher version of glusterFS Native Client is only available via rh-gluster-3-client-for-rhel-8-x86_64-rpms for Red Hat Gluster Storage based on Red Hat Enterprise Enterprise Linux (RHEL 8) and rh-gluster-3-client-for-rhel-7-server-rpms for Red Hat Gluster Storage based on RHEL 7. Table 6.4. Red Hat Gluster Storage Support Matrix Red Hat Enterprise Linux version Red Hat Gluster Storage version Native client version 6.5 3.0 3.0, 2.1* 6.6 3.0.2, 3.0.3, 3.0.4 3.0, 2.1* 6.7 3.1, 3.1.1, 3.1.2 3.1, 3.0, 2.1* 6.8 3.1.3 3.1.3 6.9 3.2 3.2, 3.1.3* 6.9 3.3 3.3, 3.2 6.9 3.3.1 3.3.1, 3.3, 3.2 6.10 3.4 3.5*, 3.4, 3.3.z 7.1 3.1, 3.1.1 3.1.1, 3.1, 3.0 7.2 3.1.2 3.1.2, 3.1, 3.0 7.2 3.1.3 3.1.3 7.3 3.2 3.2, 3.1.3 7.4 3.2 3.2, 3.1.3 7.4 3.3 3.3, 3.2 7.4 3.3.1 3.3.1, 3.3, 3.2 7.5 3.3.1, 3.4 3.3.z, 3.4.z 7.6 3.3.1, 3.4 3.3.z, 3.4.z 7.7 3.5.1 3.4.z, 3.5.z 7.8 3.5.2 3.4.z, 3.5.z 7.9 3.5.3, 3.5.4, 3.5.5, 3.5.6, 3.5.7 3.4.z, 3.5.z 8.1 NA 3.5 8.2 3.5.2 3.5.z 8.3 3.5.3 3.5.z 8.4 3.5.4 3.5.z 8.5 3.5.5, 3.5.6 3.5.z 8.6 3.5.7 3.5.z Warning Red Hat Gluster Storage 3.5 supports RHEL 6.x using Native Client 3.5. Warning For Red Hat Gluster Storage 3.5, Red Hat supports only Red Hat Gluster Storage 3.4 and 3.5 clients. For more information on the release version see, https://access.redhat.com/solutions/543123 . 6.2.1. Installing Native Client After installing the client operating system, register the target system to Red Hat Network and subscribe to the Red Hat Enterprise Linux Server channel. There are two ways to register and subscribe a system to Red Hat Subscription Management: Use the Command Line to Register and Subscribe a System to Red Hat Subscription Management Use the Web Interface to Register and Subscribe a System to Red Hat Subscription Management Important All clients must be of the same version. Red Hat strongly recommends upgrading the servers before upgrading the clients. Use the Command Line to Register and Subscribe a System to Red Hat Subscription Management Register the system using the command line, and subscribe to the correct repositories. Prerequisites Know the user name and password of the Red Hat Subscription Manager account with Red Hat Gluster Storage entitlements. Run the subscription-manager register command to list the available pools. Select the appropriate pool and enter your Red Hat Subscription Manager user name and password to register the system with Red Hat Subscription Manager. Depending on your client, run one of the following commands to subscribe to the correct repositories. For Red Hat Enterprise Linux 8 clients: For Red Hat Enterprise Linux 7.x clients: Note The following command can also be used, but Red Hat Gluster Storage may deprecate support for this repository in future releases. 
For Red Hat Enterprise Linux 6.1 and later clients: For more information on subscriptions, refer to Section 3.1 Registering and attaching a system from the Command Line in Using and Configuring Red Hat Subscription Management . Verify that the system is subscribed to the required repositories. Use the Web Interface to Register and Subscribe a System to Red Hat Subscription Management Register the system using the web interface, and subscribe to the correct channels. Prerequisites Know the user name and password of the Red Hat Subscription Management (RHSM) account with Red Hat Gluster Storage entitlements. Log on to Red Hat Subscription Management ( https://access.redhat.com/management ). Click the Systems link at the top of the screen. Click the name of the system to which the Red Hat Gluster Storage Native Client channel must be appended. Click Alter Channel Subscriptions in the Subscribed Channels section of the screen. Expand the node for Additional Services Channels for Red Hat Enterprise Linux 7 for x86_64 or for Red Hat Enterprise Linux 6 for x86_64 or for Red Hat Enterprise Linux 5 for x86_64, depending on the client platform. Click the Change Subscriptions button to finalize the changes. When the page refreshes, select the Details tab to verify the system is subscribed to the appropriate channels. Install Native Client Packages Install Native Client packages from Red Hat Network Prerequisites Use the Command Line to Register and Subscribe a System to Red Hat Subscription Management or Use the Web Interface to Register and Subscribe a System to Red Hat Subscription Management Run the yum install command to install the native client RPM packages. For Red Hat Enterprise Linux 5.x client systems, run the modprobe command to load FUSE modules before mounting Red Hat Gluster Storage volumes. For more information on loading modules at boot time, see https://access.redhat.com/knowledge/solutions/47028 . 6.2.2. Upgrading Native Client Before updating the Native Client, subscribe the clients to the channels mentioned in Section 6.2.1, "Installing Native Client". Unmount gluster volumes Unmount any gluster volumes prior to upgrading the native client. Upgrade the client Run the yum update command to upgrade the native client: Remount gluster volumes Remount volumes as discussed in Section 6.2.3, "Mounting Red Hat Gluster Storage Volumes" . 6.2.3. Mounting Red Hat Gluster Storage Volumes After installing Native Client, the Red Hat Gluster Storage volumes must be mounted to access data. Three methods are available: Section 6.2.3.2, "Mounting Volumes Manually" Section 6.2.3.3, "Mounting Volumes Automatically" Section 6.2.3.4, "Manually Mounting Sub-directories Using Native Client" After mounting a volume, test the mounted volume using the procedure described in Section 6.2.3.5, "Testing Mounted Volumes" . Note Clients should be on the same version as the server, and at least on the version immediately preceding the server version. For Red Hat Gluster Storage 3.5, the recommended native client version is either 3.4.z or 3.5. For other versions, see Section 6.2, "Native Client" . Server names selected during volume creation should be resolvable on the client machine. Use appropriate /etc/hosts entries, or a DNS server to resolve server names to IP addresses. Internet Protocol Version 6 (IPv6) support is available only for Red Hat Hyperconverged Infrastructure for Virtualization environments and not for Red Hat Gluster Storage standalone environments. 6.2.3.1.
Mount Commands and Options The following options are available when using the mount -t glusterfs command. All options must be separated with commas. backup-volfile-servers=<volfile_server2>:<volfile_server3>:...:<volfile_serverN> List of the backup volfile servers to mount the client. If this option is specified while mounting the fuse client, when the first volfile server fails, the servers specified in backup-volfile-servers option are used as volfile servers to mount the client until the mount is successful. Note This option was earlier specified as backupvolfile-server which is no longer valid. log-level Logs only specified level or higher severity messages in the log-file . log-file Logs the messages in the specified file. transport-type Specifies the transport type that FUSE client must use to communicate with bricks. If the volume was created with only one transport type, then that becomes the default when no value is specified. In case of tcp,rdma volume, tcp is the default. dump-fuse This mount option creates dump of fuse traffic between the glusterfs client (fuse userspace server) and the kernel. The interface to mount a glusterfs volume is the standard mount(8) command from the CLI. This feature enables the same in the mount option. # mount -t glusterfs -odump-fuse= filename hostname :/ volname mount-path For example, The above command generates a binary file with the name dumpfile . Note The fusedump grows large with time and notably if the client gets a heavy load. So this is not an intended use case to do fusedump during normal usage. It is advised to use this to get a dump from a particular scenario, for diagnostic purposes. You need to unmount and remount the volume without the fusedump option to stop dumping. ro Mounts the file system with read-only permissions. acl Enables POSIX Access Control List on mount. See Section 6.5.4, "Checking ACL enablement on a mounted volume" for further information. background-qlen= n Enables FUSE to handle n number of requests to be queued before subsequent requests are denied. Default value of n is 64. enable-ino32 Enables file system to present 32-bit inodes instead of 64-bit inodes. reader-thread-count= n Enables FUSE to add n number of reader threads that can give better I/O performance. Default value of n is 1 . lru-limit This mount command option clears the inodes from the least recently used (lru) list (which keeps non-referenced inodes) after the inode limit has reached. For example, Where NNNN is a positive integer. The default value of NNNN is 128k (131072) and the recommended value is 20000 and above. If 0 is specified as the lru-limit then it means that no invalidation of inodes from the lru-list. 6.2.3.2. Mounting Volumes Manually Manually Mount a Red Hat Gluster Storage Volume or Subdirectory Create a mount point and run the following command as required: For a Red Hat Gluster Storage Volume mount -t glusterfs HOSTNAME|IPADDRESS :/ VOLNAME / MOUNTDIR For a Red Hat Gluster Storage Volume's Subdirectory mount -t glusterfs HOSTNAME|IPADDRESS :/ VOLNAME/ SUBDIRECTORY / MOUNTDIR Note The server specified in the mount command is used to fetch the glusterFS configuration volfile, which describes the volume name. The client then communicates directly with the servers mentioned in the volfile (which may not actually include the server used for mount). If a mount point has not yet been created for the volume, run the mkdir command to create a mount point. Run the mount -t glusterfs command, using the key in the task summary as a guide. 
For a Red Hat Gluster Storage Volume: For a Red Hat Gluster Storage Volume's Subdirectory 6.2.3.3. Mounting Volumes Automatically Volumes can be mounted automatically each time the systems starts. The server specified in the mount command is used to fetch the glusterFS configuration volfile, which describes the volume name. The client then communicates directly with the servers mentioned in the volfile (which may not actually include the server used for mount). Mounting a Volume Automatically Mount a Red Hat Gluster Storage Volume automatically at server start. Open the /etc/fstab file in a text editor. Append the following configuration to the fstab file: For a Red Hat Gluster Storage Volume For a Red Hat Gluster Storage Volume's Subdirectory Using the example server names, the entry contains the following replaced values. OR If you want to specify the transport type then check the following example: OR 6.2.3.4. Manually Mounting Sub-directories Using Native Client With Red Hat Gluster Storage 3.x, you can share a single Gluster volume with different clients and they all can mount only a subset of the volume namespace. This feature is similar to the NFS subdirectory mount feature where you can export a subdirectory of an already exported volume. You can also use this feature to restrict full access to any particular volume. Mounting subdirectories provides the following benefits: Provides namespace isolation so that multiple users can access the storage without risking namespace collision with other users. Prevents the root file system from becoming full in the event of a mount failure. You can mount a subdirectory using native client by running either of the following commands: OR For example: In the above example: The auth.allow option allows only the directories specified as the value of the auth.allow option to be mounted. Each group of auth-allow is separated by a comma ( , ). Each group has a directory separated by parentheses, () , which contains the valid IP addresses. All subdirectories start with / , that is, no relative path to a volume, but everything is an absolute path, taking / as the root directory of the volume. Note By default, the authentication is * , where any given subdirectory in a volume can be mounted by all clients. 6.2.3.5. Testing Mounted Volumes Testing Mounted Red Hat Gluster Storage Volumes Using the command-line, verify the Red Hat Gluster Storage volumes have been successfully mounted. All three commands can be run in the order listed, or used independently to verify a volume has been successfully mounted. Prerequisites Section 6.2.3.3, "Mounting Volumes Automatically" , or Section 6.2.3.2, "Mounting Volumes Manually" Run the mount command to check whether the volume was successfully mounted. OR If transport option is used while mounting a volume, mount status will have the transport type appended to the volume name. For example, for transport=tcp: OR Run the df command to display the aggregated storage space from all the bricks in a volume. Move to the mount directory using the cd command, and list the contents. | [
"subscription-manager register",
"subscription-manager repos --enable=rh-gluster-3-client-for-rhel-8-x86_64-rpms",
"subscription-manager repos --enable=rhel-7-server-rpms --enable=rh-gluster-3-client-for-rhel-7-server-rpms",
"subscription-manager repos --enable=rhel-7-server-rh-common-rpms",
"subscription-manager repos --enable=rhel-6-server-rpms --enable=rhel-6-server-rhs-client-1-rpms",
"yum repolist",
"yum install glusterfs glusterfs-fuse",
"modprobe fuse",
"umount /mnt/glusterfs",
"yum update glusterfs glusterfs-fuse",
"mount -t glusterfs -o backup-volfile- servers=volfile_server2:volfile_server3:.... ..:volfile_serverN ,transport-type tcp,log-level=WARNING,reader-thread-count=2,log-file=/var/log/gluster.log server1:/test-volume /mnt/glusterfs",
"mount -t glusterfs -odump-fuse=/dumpfile 10.70.43.18:/arbiter /mnt/arbiter",
"mount -olru-limit= NNNN -t glusterfs hostname :/ volname /mnt/mountdir",
"mkdir /mnt/glusterfs",
"mount -t glusterfs server1:/test-volume /mnt/glusterfs",
"mount -t glusterfs server1:/test-volume/sub-dir /mnt/glusterfs",
"HOSTNAME|IPADDRESS :/ VOLNAME / MOUNTDIR glusterfs defaults,_netdev 0 0",
"HOSTNAME|IPADDRESS :/ VOLNAME / SUBDIRECTORY / MOUNTDIR glusterfs defaults,_netdev 0 0",
"server1:/test-volume /mnt/glusterfs glusterfs defaults,_netdev 0 0",
"server1:/test-volume/subdir /mnt/glusterfs glusterfs defaults,_netdev 0 0",
"server1:/test-volume /mnt/glusterfs glusterfs defaults,_netdev,transport=tcp 0 0",
"server1:/test-volume/sub-dir /mnt/glusterfs glusterfs defaults,_netdev,transport=tcp 0 0",
"mount -t glusterfs hostname :/ volname / subdir / mount-point",
"mount -t glusterfs hostname :/ volname -osubdir-mount= subdir / mount-point",
"gluster volume set test-vol auth.allow \"/(192.168.10.*|192.168.11.*),/subdir1(192.168.1.*),/subdir2(192.168.8.*)\"",
"mount server1:/test-volume on /mnt/glusterfs type fuse.glusterfs(rw,allow_other,default_permissions,max_read=131072",
"mount server1:/test-volume/sub-dir on /mnt/glusterfs type fuse.glusterfs(rw,allow_other,default_permissions,max_read=131072",
"mount server1:/test-volume.tcp on /mnt/glusterfs type fuse.glusterfs(rw,allow_other,default_permissions,max_read=131072",
"mount server1:/test-volume/sub-dir.tcp on /mnt/glusterfs type fuse.glusterfs(rw,allow_other,default_permissions,max_read=131072",
"df -h /mnt/glusterfs Filesystem Size Used Avail Use% Mounted on server1:/test-volume 28T 22T 5.4T 82% /mnt/glusterfs",
"cd /mnt/glusterfs ls"
]
| https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/sect-native_client |
Chapter 15. Managing Kerberos ticket policies | Chapter 15. Managing Kerberos ticket policies Kerberos ticket policies in Identity Management (IdM) set restrictions on Kerberos ticket access, duration, and renewal. You can configure Kerberos ticket policies for the Key Distribution Center (KDC) running on your IdM server. The following concepts and operations are performed when managing Kerberos ticket policies: The role of the IdM KDC IdM Kerberos ticket policy types Kerberos authentication indicators Enforcing authentication indicators for an IdM service Configuring the global ticket lifecycle policy Configuring global ticket policies per authentication indicator Configuring the default ticket policy for a user Configuring individual authentication indicator ticket policies for a user Authentication indicator options for the krbtpolicy-mod command 15.1. The role of the IdM KDC Identity Management's authentication mechanisms use the Kerberos infrastructure established by the Key Distribution Center (KDC). The KDC is the trusted authority that stores credential information and ensures the authenticity of data originating from entities within the IdM network. Each IdM user, service, and host acts as a Kerberos client and is identified by a unique Kerberos principal : For users: identifier@REALM , such as [email protected] For services: service/fully-qualified-hostname@REALM , such as http/[email protected] For hosts: host/fully-qualified-hostname@REALM , such as host/[email protected] The following image is a simplification of the communication between a Kerberos client, the KDC, and a Kerberized application that the client wants to communicate with. A Kerberos client identifies itself to the KDC by authenticating as a Kerberos principal. For example, an IdM user performs kinit username and provides their password. The KDC checks for the principal in its database, authenticates the client, and evaluates Kerberos ticket policies to determine whether to grant the request. The KDC issues the client a ticket-granting ticket (TGT) with a lifecycle and authentication indicators according to the appropriate ticket policy. With the TGT, the client requests a service ticket from the KDC to communicate with a Kerberized service on a target host. The KDC checks if the client's TGT is still valid, and evaluates the service ticket request against ticket policies. The KDC issues the client a service ticket . With the service ticket, the client can initiate encrypted communication with the service on the target host. 15.2. IdM Kerberos ticket policy types IdM Kerberos ticket policies implement the following ticket policy types: Connection policy To protect Kerberized services with different levels of security, you can define connection policies to enforce rules based on which pre-authentication mechanism a client used to retrieve a ticket-granting ticket (TGT). For example, you can require smart card authentication to connect to client1.example.com , and require two-factor authentication to access the testservice application on client2.example.com . To enforce connection policies, associate authentication indicators with services. Only clients that have the required authentication indicators in their service ticket requests are able to access those services. For more information, see Kerberos authentication indicators . 
Ticket lifecycle policy Each Kerberos ticket has a lifetime and a potential renewal age : you can renew a ticket before it reaches its maximum lifetime, but not after it exceeds its maximum renewal age. The default global ticket lifetime is one day (86400 seconds) and the default global maximum renewal age is one week (604800 seconds). To adjust these global values, see Configuring the global ticket lifecycle policy . You can also define your own ticket lifecycle policies: To configure different global ticket lifecycle values for each authentication indicator, see Configuring global ticket policies per authentication indicator . To define ticket lifecycle values for a single user that apply regardless of the authentication method used, see Configuring the default ticket policy for a user . To define individual ticket lifecycle values for each authentication indicator that only apply to a single user, see Configuring individual authentication indicator ticket policies for a user . 15.3. Kerberos authentication indicators The Kerberos Key Distribution Center (KDC) attaches authentication indicators to a ticket-granting ticket (TGT) based on which pre-authentication mechanism the client used to prove its identity: otp two-factor authentication (password + One-Time Password) radius RADIUS authentication (commonly for 802.1x authentication) pkinit PKINIT, smart card, or certificate authentication hardened hardened passwords (SPAKE or FAST) [1] The KDC then attaches the authentication indicators from the TGT to any service ticket requests that stem from it. The KDC enforces policies such as service access control, maximum ticket lifetime, and maximum renewable age based on the authentication indicators. Authentication indicators and IdM services If you associate a service or a host with an authentication indicator, only clients that used the corresponding authentication mechanism to obtain a TGT will be able to access it. The KDC, not the application or service, checks for authentication indicators in service ticket requests, and grants or denies requests based on Kerberos connection policies. For example, to require two-factor authentication to connect to a Virtual Private Network (VPN), associate the otp authentication indicator with that service. Only users who used a One-Time password to obtain their initial TGT from the KDC will be able to log in to the VPN: Figure 15.1. Example of a VPN service requiring the otp authentication indicator If a service or a host has no authentication indicators assigned to it, it will accept tickets authenticated by any mechanism. Additional resources Enforcing authentication indicators for an IdM service Enabling GSSAPI authentication and enforcing Kerberos authentication indicators for sudo on an IdM client 15.4. Enforcing authentication indicators for an IdM service The authentication mechanisms supported by Identity Management (IdM) vary in their authentication strength. For example, obtaining the initial Kerberos ticket-granting ticket (TGT) using a one-time password (OTP) in combination with a standard password is considered more secure than authentication using only a standard password. By associating authentication indicators with a particular IdM service, you can, as an IdM administrator, configure the service so that only users who used those specific pre-authentication mechanisms to obtain their initial ticket-granting ticket (TGT) will be able to access the service. 
In this way, you can configure different IdM services so that: Only users who used a stronger authentication method to obtain their initial TGT, such as a one-time password (OTP), can access services critical to security, such as a VPN. Users who used simpler authentication methods to obtain their initial TGT, such as a password, can only access non-critical services, such as local logins. Figure 15.2. Example of authenticating using different technologies This procedure describes creating an IdM service and configuring it to require particular Kerberos authentication indicators from incoming service ticket requests. 15.4.1. Creating an IdM service entry and its Kerberos keytab Adding an IdM service entry to IdM for a service running on an IdM host creates a corresponding Kerberos principal, and allows the service to request an SSL certificate, a Kerberos keytab, or both. The following procedure describes creating an IdM service entry and generating an associated Kerberos keytab for encrypting communication with that service. Prerequisites Your service can store a Kerberos principal, an SSL certificate, or both. Procedure Add an IdM service with the ipa service-add command to create a Kerberos principal associated with it. For example, to create the IdM service entry for the testservice application that runs on host client.example.com : Generate and store a Kerberos keytab for the service on the client. Verification Display information about an IdM service with the ipa service-show command. Display the contents of the service's Kerberos keytab with the klist command. 15.4.2. Associating authentication indicators with an IdM service using IdM CLI As an Identity Management (IdM) administrator, you can configure a host or a service to require that a service ticket presented by the client application contains a specific authentication indicator. For example, you can ensure that only users who used a valid IdM two-factor authentication token with their password when obtaining a Kerberos ticket-granting ticket (TGT) will be able to access that host or service. Follow this procedure to configure a service to require particular Kerberos authentication indicators from incoming service ticket requests. Prerequisites You have created an IdM service entry for a service that runs on an IdM host. See Creating an IdM service entry and its Kerberos keytab . You have obtained the ticket-granting ticket of an administrative user in IdM. Warning Do not assign authentication indicators to internal IdM services. The following IdM services cannot perform the interactive authentication steps required by PKINIT and multi-factor authentication methods: Procedure Use the ipa service-mod command to specify one or more required authentication indicators for a service, identified with the --auth-ind argument. Authentication method --auth-ind value Two-factor authentication otp RADIUS authentication radius PKINIT, smart card, or certificate authentication pkinit Hardened passwords (SPAKE or FAST) hardened For example, to require that a user was authenticated with smart card or OTP authentication to retrieve a service ticket for the testservice principal on host client.example.com : Note To remove all authentication indicators from a service, provide an empty list of indicators: Verification Display information about an IdM service, including the authentication indicators it requires, with the ipa service-show command. 
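Taken together, the steps in this section condense into a short sequence; the host name client.example.com, the EXAMPLE.COM realm, and the testservice name are the example values used above, so substitute the values for your environment:
ipa service-add testservice/client.example.com
ipa-getkeytab -k /etc/testservice.keytab -p testservice/client.example.com
ipa service-mod testservice/[email protected] --auth-ind otp --auth-ind pkinit
ipa service-show testservice/client.example.com
After the final command, the Authentication Indicators field lists otp and pkinit, confirming that only tickets carrying one of those indicators can be used to obtain a service ticket for this service.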
Additional resources Retrieving a Kerberos service ticket for an IdM service Enabling GSSAPI authentication and enforcing Kerberos authentication indicators for sudo on an IdM client 15.4.3. Associating authentication indicators with an IdM service using IdM Web UI As an Identity Management (IdM) administrator, you can configure a host or a service to require a service ticket presented by the client application to contain a specific authentication indicator. For example, you can ensure that only users who used a valid IdM two-factor authentication token with their password when obtaining a Kerberos ticket-granting ticket (TGT) will be able to access that host or service. Follow this procedure to use the IdM Web UI to configure a host or service to require particular Kerberos authentication indicators from incoming ticket requests. Prerequisites You have logged in to the IdM Web UI as an administrative user. Procedure Select Identity Hosts or Identity Services . Click the name of the required host or service. Under Authentication indicators , select the required authentication method. For example, selecting OTP ensures that only users who used a valid IdM two-factor authentication token with their password when obtaining a Kerberos TGT will be able to access the host or service. If you select both OTP and RADIUS , then both users that used a valid IdM two-factor authentication token with their password when obtaining a Kerberos TGT and users that used the RADIUS server for obtaining their Kerberos TGT will be allowed access. Click Save at the top of the page. Additional resources Retrieving a Kerberos service ticket for an IdM service Enabling GSSAPI authentication and enforcing Kerberos authentication indicators for sudo on an IdM client 15.4.4. Retrieving a Kerberos service ticket for an IdM service The following procedure describes retrieving a Kerberos service ticket for an IdM service. You can use this procedure to test Kerberos ticket policies, such as enforcing that certain Kerberos authentication indicators are present in a ticket-granting ticket (TGT). Prerequisites If the service you are working with is not an internal IdM service, you have created a corresponding IdM service entry for it. See Creating an IdM service entry and its Kerberos keytab . You have a Kerberos ticket-granting ticket (TGT). Procedure Use the kvno command with the -S option to retrieve a service ticket, and specify the name of the IdM service and the fully-qualified domain name of the host that manages it. Note If you need to access an IdM service and your current ticket-granting ticket (TGT) does not possess the required Kerberos authentication indicators associated with it, clear your current Kerberos credentials cache with the kdestroy command and retrieve a new TGT: For example, if you initially retrieved a TGT by authenticating with a password, and you need to access an IdM service that has the pkinit authentication indicator associated with it, destroy your current credentials cache and re-authenticate with a smart card. See Kerberos authentication indicators . Verification Use the klist command to verify that the service ticket is in the default Kerberos credentials cache. 15.4.5. Additional resources See Kerberos authentication indicators . 15.5. Configuring the global ticket lifecycle policy The global ticket policy applies to all service tickets and to users that do not have any per-user ticket policies defined. 
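Before changing the global policy, you can record the current values with the same command used in the verification step below, so that you have a reference point to compare against later:
ipa krbtpolicy-show
On an unmodified installation, this reports the defaults described earlier in this chapter: a maximum life of 86400 seconds (one day) and a maximum renewal age of 604800 seconds (one week).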
The following procedure describes adjusting the maximum ticket lifetime and maximum ticket renewal age for the global Kerberos ticket policy using the ipa krbtpolicy-mod command. While using the ipa krbtpolicy-mod command, specify at least one of the following arguments: --maxlife for the maximum ticket lifetime in seconds --maxrenew for the maximum renewable age in seconds Procedure To modify the global ticket policy: In this example, the maximum lifetime is set to eight hours (8 * 60 minutes * 60 seconds) and the maximum renewal age is set to one day (24 * 60 minutes * 60 seconds). Optional: To reset the global Kerberos ticket policy to the default installation values: Verification Display the global ticket policy: Additional resources Configuring the default ticket policy for a user Configuring individual authentication indicator ticket policies for a user 15.6. Configuring global ticket policies per authentication indicator Follow this procedure to adjust the global maximum ticket lifetime and maximum renewable age for each authentication indicator. These settings apply to users that do not have per-user ticket policies defined. Use the ipa krbtpolicy-mod command to specify the global maximum lifetime or maximum renewable age for Kerberos tickets depending on the authentication indicators attached to them. Procedure For example, to set the global two-factor ticket lifetime and renewal age values to one week, and the global smart card ticket lifetime and renewal age values to two weeks: Verification Display the global ticket policy: Notice that the OTP and PKINIT values are different from the global default Max life and Max renew values. Additional resources Authentication indicator options for the krbtpolicy-mod command Configuring the default ticket policy for a user Configuring individual authentication indicator ticket policies for a user 15.7. Configuring the default ticket policy for a user You can define a unique Kerberos ticket policy that only applies to a single user. These per-user settings override the global ticket policy, for all authentication indicators. Use the ipa krbtpolicy-mod username command, and specify at least one of the following arguments: --maxlife for the maximum ticket lifetime in seconds --maxrenew for the maximum renewable age in seconds Procedure For example, to set the IdM admin user's maximum ticket lifetime to two days and maximum renewal age to two weeks: Optional: To reset the ticket policy for a user: Verification Display the effective Kerberos ticket policy that applies to a user: Additional resources Configuring the global ticket lifecycle policy Configuring global ticket policies per authentication indicator 15.8. Configuring individual authentication indicator ticket policies for a user As an administrator, you can define Kerberos ticket policies for a user that differ per authentication indicator. For example, you can configure a policy to allow the IdM admin user to renew a ticket for two days if it was obtained with OTP authentication, and a week if it was obtained with smart card authentication. These per-authentication indicator settings will override the user's default ticket policy, the global default ticket policy, and any global authentication indicator ticket policy. Use the ipa krbtpolicy-mod username command to set custom maximum lifetime and maximum renewable age values for a user's Kerberos tickets depending on the authentication indicators attached to them. 
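For instance, a per-user policy that keeps smart card (pkinit) tickets valid longer than OTP tickets can combine the options listed in Table 15.1 in a single call; the values shown here are examples only:
ipa krbtpolicy-mod admin --otp-maxlife=$((8*60*60)) --pkinit-maxlife=$((24*60*60))
This sets an eight-hour maximum lifetime for the admin user's OTP-authenticated tickets and a 24-hour maximum lifetime for tickets obtained with smart card authentication.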
Procedure For example, to allow the IdM admin user to renew a Kerberos ticket for two days if it was obtained with One-Time Password authentication, set the --otp-maxrenew option: Optional: To reset the ticket policy for a user: Verification Display the effective Kerberos ticket policy that applies to a user: Additional resources Authentication indicator options for the krbtpolicy-mod command Configuring the default ticket policy for a user Configuring the global ticket lifecycle policy Configuring global ticket policies per authentication indicator 15.9. Authentication indicator options for the krbtpolicy-mod command Specify values for authentication indicators with the following arguments. Table 15.1. Authentication indicator options for the krbtpolicy-mod command Authentication indicator Argument for maximum lifetime Argument for maximum renewal age otp --otp-maxlife --otp-maxrenew radius --radius-maxlife --radius-maxrenew pkinit --pkinit-maxlife --pkinit-maxrenew hardened --hardened-maxlife --hardened-maxrenew [1] A hardened password is protected against brute-force password dictionary attacks by using Single-Party Public-Key Authenticated Key Exchange (SPAKE) pre-authentication and/or Flexible Authentication via Secure Tunneling (FAST) armoring. | [
"ipa service-add testservice/client.example.com ------------------------------------------------------------- Modified service \"testservice/[email protected]\" ------------------------------------------------------------- Principal name: testservice/[email protected] Principal alias: testservice/[email protected] Managed by: client.example.com",
"ipa-getkeytab -k /etc/testservice.keytab -p testservice/client.example.com Keytab successfully retrieved and stored in: /etc/testservice.keytab",
"ipa service-show testservice/client.example.com Principal name: testservice/[email protected] Principal alias: testservice/[email protected] Keytab: True Managed by: client.example.com",
"klist -ekt /etc/testservice.keytab Keytab name: FILE:/etc/testservice.keytab KVNO Timestamp Principal ---- ------------------- ------------------------------------------------------ 2 04/01/2020 17:52:55 testservice/[email protected] (aes256-cts-hmac-sha1-96) 2 04/01/2020 17:52:55 testservice/[email protected] (aes128-cts-hmac-sha1-96) 2 04/01/2020 17:52:55 testservice/[email protected] (camellia128-cts-cmac) 2 04/01/2020 17:52:55 testservice/[email protected] (camellia256-cts-cmac)",
"host /[email protected] HTTP /[email protected] ldap /[email protected] DNS /[email protected] cifs /[email protected]",
"ipa service-mod testservice/[email protected] --auth-ind otp --auth-ind pkinit ------------------------------------------------------------- Modified service \"testservice/[email protected]\" ------------------------------------------------------------- Principal name: testservice/[email protected] Principal alias: testservice/[email protected] Authentication Indicators: otp, pkinit Managed by: client.example.com",
"ipa service-mod testservice/[email protected] --auth-ind '' ------------------------------------------------------ Modified service \"testservice/[email protected]\" ------------------------------------------------------ Principal name: testservice/[email protected] Principal alias: testservice/[email protected] Managed by: client.example.com",
"ipa service-show testservice/client.example.com Principal name: testservice/[email protected] Principal alias: testservice/[email protected] Authentication Indicators: otp, pkinit Keytab: True Managed by: client.example.com",
"kvno -S testservice client.example.com testservice/[email protected]: kvno = 1",
"kdestroy",
"klist_ Ticket cache: KCM:1000 Default principal: [email protected] Valid starting Expires Service principal 04/01/2020 12:52:42 04/02/2020 12:52:39 krbtgt/[email protected] 04/01/2020 12:54:07 04/02/2020 12:52:39 testservice/[email protected]",
"ipa krbtpolicy-mod --maxlife= USD((8*60*60)) --maxrenew= USD((24*60*60)) Max life: 28800 Max renew: 86400",
"ipa krbtpolicy-reset Max life: 86400 Max renew: 604800",
"ipa krbtpolicy-show Max life: 28800 Max renew: 86640",
"ipa krbtpolicy-mod --otp-maxlife= 604800 --otp-maxrenew= 604800 --pkinit-maxlife= 172800 --pkinit-maxrenew= 172800",
"ipa krbtpolicy-show Max life: 86400 OTP max life: 604800 PKINIT max life: 172800 Max renew: 604800 OTP max renew: 604800 PKINIT max renew: 172800",
"ipa krbtpolicy-mod admin --maxlife= 172800 --maxrenew= 1209600 Max life: 172800 Max renew: 1209600",
"ipa krbtpolicy-reset admin",
"ipa krbtpolicy-show admin Max life: 172800 Max renew: 1209600",
"ipa krbtpolicy-mod admin --otp-maxrenew=USD((2*24*60*60)) OTP max renew: 172800",
"ipa krbtpolicy-reset username",
"ipa krbtpolicy-show admin Max life: 28800 Max renew: 86640"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/managing-kerberos-ticket-policies_managing-users-groups-hosts |
Chapter 4. Executing the backup procedure | Chapter 4. Executing the backup procedure Before you perform a fast forward upgrade, back up the undercloud and the overcloud control plane nodes so that you can restore them to their previous state if an error occurs. Note Before you back up the undercloud and overcloud, ensure that you do not perform any operations on the overcloud from the undercloud. 4.1. Performing prerequisite tasks before backing up the undercloud Do not perform an undercloud backup when you deploy the undercloud or when you make changes to an existing undercloud. To prevent data corruption, confirm that there are no stack failures or ongoing tasks, and that all OpenStack services except for mariadb are stopped before you back up the undercloud node. Procedure List failures for all available stacks: Verify that there are no ongoing tasks in the cloud: If the command returns no results, there are no ongoing tasks. Stop all OpenStack services in the cloud: Start the tripleo_mysql service: Verify that the tripleo_mysql service is running: 4.2. Backing up the undercloud To back up the undercloud node, you must log in as the root user on the undercloud node. As a precaution, you must back up the database to ensure that you can restore it. Prerequisites You have created and exported the backup directory. For more information, see Creating and exporting the backup directory . You have performed prerequisite tasks before backing up the undercloud. For more information, see Performing prerequisite tasks before backing up the undercloud . You have installed and configured ReaR on the undercloud node. For more information, see Install and Configure ReaR . Procedure Locate the database password. Back up the databases: Stop the mariadb database service: Create the backup: You can find the backup ISO file that you create with ReaR on the backup node in the /ctl_plane_backups directory. 4.3. Backing up the control plane To back up the control plane, you must first stop the pacemaker cluster and all containers operating on the control plane nodes. To ensure state consistency, do not operate the stack. After you complete the backup procedure, start the pacemaker cluster and the containers. As a precaution, you must back up the database to ensure that you can restore the database after you restart the pacemaker cluster and containers. Back up the control plane nodes simultaneously. Prerequisites You have created and exported the backup directory. For more information, see Creating and exporting the backup directory . You have installed and configured ReaR on the undercloud node. For more information, see Install and Configure ReaR . Procedure Locate the database password: Back up the databases: On one of the control plane nodes, stop the pacemaker cluster: Important Do not operate the stack. When you stop the pacemaker cluster and the containers, this results in the temporary interruption of control plane services to Compute nodes. There is also disruption to network connectivity, Ceph, and the NFS data plane service. You cannot create instances, migrate instances, authenticate requests, or monitor the health of the cluster until the pacemaker cluster and the containers return to service following the final step of this procedure. On each control plane node, stop the containers.
Stop the containers: Stop the [email protected] container: Stop the [email protected] container: To back up the control plane, run the following command as root in the command line interface of each control plane node: You can find the backup ISO file that you create with ReaR on the backup node in the /ctl_plane_backups directory. Note When you execute the backup command, you might see warning messages regarding the tar command and sockets that are ignored during the tar process, similar to the following: When the backup procedure generates ISO images for each of the control plane nodes, restart the pacemaker cluster and the containers: On one of the control plane nodes, enter the following command: On each control plane node, start the containers. Start the [email protected] container: Start the [email protected] container: | [
"(undercloud) [stack@undercloud-0 ~]USD source stackrc && for i in `openstack stack list -c 'Stack Name' -f value`; do openstack stack failures list USDi; done",
"(undercloud) [stack@undercloud-0 ~]USD openstack stack list --nested | grep -v \"_COMPLETE\"",
"systemctl stop tripleo_*",
"systemctl start tripleo_mysql",
"systemctl status tripleo_mysql",
"PASSWORD=USD(/bin/hiera -c /etc/puppet/hiera.yaml mysql::server::root_password)",
"podman exec mysql bash -c \"mysql -uroot -pUSDPASSWORD -s -N -e \\\"SELECT CONCAT('\\\\\\\"SHOW GRANTS FOR ''',user,'''@''',host,''';\\\\\\\"') FROM mysql.user where (length(user) > 0 and user NOT LIKE 'root')\\\" | xargs -n1 mysql -uroot -pUSDPASSWORD -s -N -e | sed 's/USD/;/' \" > openstack-backup-mysql-grants.sql",
"podman exec mysql bash -c \"mysql -uroot -pUSDPASSWORD -s -N -e \\\"select distinct table_schema from information_schema.tables where engine='innodb' and table_schema != 'mysql';\\\" | xargs mysqldump -uroot -pUSDPASSWORD --single-transaction --databases\" > openstack-backup-mysql.sql",
"systemctl stop tripleo_mysql",
"rear -d -v mkbackup",
"PASSWORD=USD(/bin/hiera -c /etc/puppet/hiera.yaml mysql::server::root_password)",
"podman exec galera-bundle-podman-X bash -c \"mysql -uroot -pUSDPASSWORD -s -N -e \\\"SELECT CONCAT('\\\\\\\"SHOW GRANTS FOR ''',user,'''@''',host,''';\\\\\\\"') FROM mysql.user where (length(user) > 0 and user NOT LIKE 'root')\\\" | xargs -n1 mysql -uroot -pUSDPASSWORD -s -N -e | sed 's/USD/;/' \" > openstack-backup-mysql-grants.sql",
"podman exec galera-bundle-podman-X bash -c \"mysql -uroot -pUSDPASSWORD -s -N -e \\\"select distinct table_schema from information_schema.tables where engine='innodb' and table_schema != 'mysql';\\\" | xargs mysqldump -uroot -pUSDPASSWORD --single-transaction --databases\" > openstack-backup-mysql.sql",
"pcs cluster stop --all",
"systemctl stop tripleo_*",
"sudo systemctl stop ceph-mon@USD(hostname -s)",
"sudo systemctl stop ceph-mgr@USD(hostname -s)",
"rear -d -v mkbackup",
"WARNING: tar ended with return code 1 and below output: ---snip--- tar: /var/spool/postfix/public/qmgr: socket ignored This message indicates that files have been modified during the archiving process and the backup might be inconsistent. Relax-and-Recover continues to operate, however, it is important that you verify the backup to ensure that you can use this backup to recover your system.",
"pcs cluster start --all",
"systemctl start ceph-mon@USD(hostname -s)",
"systemctl start ceph-mgr@USD(hostname -s)"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/undercloud_and_control_plane_back_up_and_restore/execute-the-back-up-procedure-osp-ctlplane-br |
Chapter 1. Updating clusters overview | Chapter 1. Updating clusters overview You can update an OpenShift Container Platform 4 cluster with a single operation by using the web console or the OpenShift CLI ( oc ). 1.1. Understanding OpenShift Container Platform updates About the OpenShift Update Service : For clusters with internet access, Red Hat provides over-the-air updates by using an OpenShift Container Platform update service as a hosted service located behind public APIs. 1.2. Understanding update channels and releases Update channels and releases : With update channels, you can choose an update strategy. Update channels are specific to a minor version of OpenShift Container Platform. Update channels only control release selection and do not impact the version of the cluster that you install. The openshift-install binary file for a specific version of the OpenShift Container Platform always installs that minor version. For more information, see the following: Upgrading version paths Understanding fast and stable channel use and strategies Understanding restricted network clusters Switching between channels Understanding conditional updates 1.3. Understanding cluster Operator condition types The status of cluster Operators includes their condition type, which informs you of the current state of your Operator's health. The following definitions cover a list of some common ClusterOperator condition types. Operators that have additional condition types and use Operator-specific language have been omitted. The Cluster Version Operator (CVO) is responsible for collecting the status conditions from cluster Operators so that cluster administrators can better understand the state of the OpenShift Container Platform cluster. Available: The condition type Available indicates that an Operator is functional and available in the cluster. If the status is False , at least one part of the operand is non-functional and the condition requires an administrator to intervene. Progressing: The condition type Progressing indicates that an Operator is actively rolling out new code, propagating configuration changes, or otherwise moving from one steady state to another. Operators do not report the condition type Progressing as True when they are reconciling a known state. If the observed cluster state has changed and the Operator is reacting to it, then the status reports back as True , since it is moving from one steady state to another. Degraded: The condition type Degraded indicates that an Operator has a current state that does not match its required state over a period of time. The period of time can vary by component, but a Degraded status represents persistent observation of an Operator's condition. As a result, an Operator does not fluctuate in and out of the Degraded state. There might be a different condition type if the transition from one state to another does not persist over a long enough period to report Degraded . An Operator does not report Degraded during the course of a normal update. An Operator may report Degraded in response to a persistent infrastructure failure that requires eventual administrator intervention. Note This condition type is only an indication that something may need investigation and adjustment. As long as the Operator is available, the Degraded condition does not cause user workload failure or application downtime. Upgradeable: The condition type Upgradeable indicates whether the Operator is safe to update based on the current cluster state. 
The message field contains a human-readable description of what the administrator needs to do for the cluster to successfully update. The CVO allows updates when this condition is True , Unknown or missing. When the Upgradeable status is False , only minor updates are impacted, and the CVO prevents the cluster from performing impacted updates unless forced. 1.4. Understanding cluster version condition types The Cluster Version Operator (CVO) monitors cluster Operators and other components, and is responsible for collecting the status of both the cluster version and its Operators. This status includes the condition type, which informs you of the health and current state of the OpenShift Container Platform cluster. In addition to Available , Progressing , and Upgradeable , there are condition types that affect cluster versions and Operators. Failing: The cluster version condition type Failing indicates that a cluster cannot reach its desired state, is unhealthy, and requires an administrator to intervene. Invalid: The cluster version condition type Invalid indicates that the cluster version has an error that prevents the server from taking action. The CVO only reconciles the current state as long as this condition is set. RetrievedUpdates: The cluster version condition type RetrievedUpdates indicates whether or not available updates have been retrieved from the upstream update server. The condition is Unknown before retrieval, False if the updates either recently failed or could not be retrieved, or True if the availableUpdates field is both recent and accurate. ReleaseAccepted: The cluster version condition type ReleaseAccepted with a True status indicates that the requested release payload was successfully loaded without failure during image verification and precondition checking. ImplicitlyEnabledCapabilities: The cluster version condition type ImplicitlyEnabledCapabilities with a True status indicates that there are enabled capabilities that the user is not currently requesting through spec.capabilities . The CVO does not support disabling capabilities if any associated resources were previously managed by the CVO. 1.5. Preparing to perform an EUS-to-EUS update Preparing to perform an EUS-to-EUS update : Due to fundamental Kubernetes design, all OpenShift Container Platform updates between minor versions must be serialized. You must update from OpenShift Container Platform 4.10 to 4.11, and then to 4.12. You cannot update from OpenShift Container Platform 4.10 to 4.12 directly. However, if you want to update between two Extended Update Support (EUS) versions, you can do so by incurring only a single reboot of non-control plane hosts. For more information, see the following: Updating EUS-to-EUS 1.6. Updating a cluster using the web console Updating a cluster using the web console : You can update an OpenShift Container Platform cluster by using the web console. The following steps update a cluster within a minor version. You can use the same instructions for updating a cluster between minor versions. Performing a canary rollout update Pausing a MachineHealthCheck resource About updating OpenShift Container Platform on a single-node cluster Updating a cluster by using the web console Changing the update server by using the web console 1.7. Updating a cluster using the CLI Updating a cluster using the CLI : You can update an OpenShift Container Platform cluster within a minor version by using the OpenShift CLI ( oc ). The following steps update a cluster within a minor version. 
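As a brief illustration of the CLI-driven flow that this section summarizes, the current version, the selected channel, and the available target releases can be checked with oc adm upgrade before an update is requested. This is only a sketch; the output and the versions you can target depend on your cluster, its channel, and the update graph at the time you run the commands.
oc get clusterversion               # current version and overall update progress
oc adm upgrade                      # show the channel and the available updates
oc adm upgrade --to=<version>       # request an update to a specific available version
oc adm upgrade --to-latest=true     # alternatively, request the latest available version
While the Cluster Version Operator rolls out the new release, oc get clusteroperators shows the Available, Progressing, and Degraded condition types described earlier for each cluster Operator.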
You can use the same instructions for updating a cluster between minor versions. Pausing a MachineHealthCheck resource About updating OpenShift Container Platform on a single-node cluster Updating a cluster by using the CLI Changing the update server by using the CLI 1.8. Performing a canary rollout update Performing a canary rollout update : By controlling the rollout of an update to the worker nodes, you can ensure that mission-critical applications stay available during the whole update, even if the update process causes your applications to fail. Depending on your organizational needs, you might want to update a small subset of worker nodes, evaluate cluster and workload health over a period of time, and then update the remaining nodes. This is referred to as a canary update. Alternatively, you might also want to fit worker node updates, which often requires a host reboot, into smaller defined maintenance windows when it is not possible to take a large maintenance window to update the entire cluster at one time. You can perform the following procedures: Creating machine configuration pools to perform a canary rollout update Pausing the machine configuration pools Performing the cluster update Unpausing the machine configuration pools Moving a node to the original machine configuration pool 1.9. Updating a cluster that includes RHEL compute machines Updating a cluster that includes RHEL compute machines : If your cluster contains Red Hat Enterprise Linux (RHEL) machines, you must perform additional steps to update those machines. You can perform the following procedures: Updating a cluster by using the web console Optional: Adding hooks to perform Ansible tasks on RHEL machines Updating RHEL compute machines in your cluster 1.10. Updating a cluster in a disconnected environment About cluster updates in a disconnected environment : If your mirror host cannot access both the internet and the cluster, you can mirror the images to a file system that is disconnected from that environment. You can then bring that host or removable media across that gap. If the local container registry and the cluster are connected to the mirror host of a registry, you can directly push the release images to the local registry. Preparing your mirror host Configuring credentials that allow images to be mirrored Mirroring the OpenShift Container Platform image repository Updating the disconnected cluster Configuring image registry repository mirroring Widening the scope of the mirror image catalog to reduce the frequency of cluster node reboots Installing the OpenShift Update Service Operator Creating an OpenShift Update Service application Deleting an OpenShift Update Service application Uninstalling the OpenShift Update Service Operator 1.11. Updating hardware on nodes running in vSphere Updating hardware on vSphere : You must ensure that your nodes running in vSphere are running on the hardware version supported by OpenShift Container Platform. Currently, hardware version 15 or later is supported for vSphere virtual machines in a cluster. For more information, see the following: Updating virtual hardware on vSphere Scheduling an update for virtual hardware on vSphere Important Version 4.13 of OpenShift Container Platform requires VMware virtual hardware version 15 or later. 1.12. Updating hosted control planes Updating hosted control planes : On hosted control planes for OpenShift Container Platform, updates are decoupled between the control plane and the nodes. 
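As a rough sketch of this decoupling, each node pool references its own release image, which can be rolled forward independently of the hosted control plane. The namespace, resource name, and release image below are placeholders, and the field layout follows the HyperShift NodePool API, which may differ between versions, so treat this as an assumption rather than a definitive procedure.
oc get nodepool -n clusters                       # list node pools; "clusters" is an assumed namespace
oc patch nodepool <nodepool_name> -n clusters \
  --type=merge \
  -p '{"spec":{"release":{"image":"<new_release_image>"}}}'   # move the node pool to a new release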
Your service cluster provider, which is the user that hosts the cluster control planes, can manage the updates as needed. The hosted cluster handles control plane updates, and node pools handle node upgrades. For more information, see the following information: Updates for hosted control planes Updating node pools for hosted control planes | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/updating_clusters/updating-clusters-overview |
Migrating applications to Red Hat build of Quarkus 3.8 | Migrating applications to Red Hat build of Quarkus 3.8 Red Hat build of Quarkus 3.8 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/migrating_applications_to_red_hat_build_of_quarkus_3.8/index |
Chapter 9. Installing a cluster on Azure into a government region | Chapter 9. Installing a cluster on Azure into a government region In OpenShift Container Platform version 4.14, you can install a cluster on Microsoft Azure into a government region. To configure the government region, you modify parameters in the install-config.yaml file before you install the cluster. 9.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated government region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain long-term credentials . If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 9.2. Azure government regions OpenShift Container Platform supports deploying a cluster to Microsoft Azure Government (MAG) regions. MAG is specifically designed for US government agencies at the federal, state, and local level, as well as contractors, educational institutions, and other US customers that must run sensitive workloads on Azure. MAG is composed of government-only data center regions, all granted an Impact Level 5 Provisional Authorization . Installing to a MAG region requires manually configuring the Azure Government dedicated cloud instance and region in the install-config.yaml file. You must also update your service principal to reference the appropriate government environment. Note The Azure government region cannot be selected using the guided terminal prompts from the installation program. You must define the region manually in the install-config.yaml file. Remember to also set the dedicated cloud instance, like AzureUSGovernmentCloud , based on the region specified. 9.3. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 9.3.1. 
Private clusters in Azure To create a private cluster on Microsoft Azure, you must provide an existing private VNet and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. Depending on how your network connects to the private VNet, you might need to use a DNS forwarder to resolve the cluster's private DNS records. The cluster's machines use 168.63.129.16 internally for DNS resolution. For more information, see What is Azure Private DNS? and What is IP address 168.63.129.16? in the Azure documentation. The cluster still requires access to the internet to access the Azure APIs. The following items are not required or created when you install a private cluster: A BaseDomainResourceGroup , since the cluster does not create public records Public IP addresses Public DNS records Public endpoints 9.3.1.1. Limitations Private clusters on Azure are subject to only the limitations that are associated with the use of an existing VNet. 9.3.2. User-defined outbound routing In OpenShift Container Platform, you can choose your own outbound routing for a cluster to connect to the internet. This allows you to skip the creation of public IP addresses and the public load balancer. You can configure user-defined routing by modifying parameters in the install-config.yaml file before installing your cluster. A pre-existing VNet is required to use outbound routing when installing a cluster; the installation program is not responsible for configuring this. When configuring a cluster to use user-defined routing, the installation program does not create the following resources: Outbound rules for access to the internet. Public IPs for the public load balancer. Kubernetes Service object to add the cluster machines to the public load balancer for outbound requests. You must ensure the following items are available before setting user-defined routing: Egress to the internet is possible to pull container images, unless using an OpenShift image registry mirror. The cluster can access Azure APIs. Various allowlist endpoints are configured. You can reference these endpoints in the Configuring your firewall section. There are several pre-existing networking setups that are supported for internet access using user-defined routing. 9.4. About reusing a VNet for your OpenShift Container Platform cluster In OpenShift Container Platform 4.14, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules. By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet. 9.4.1. Requirements for using your VNet When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet: Subnets Route tables VNets Network Security Groups Note The installation program requires that you use the cloud-provided DNS server. 
Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster. The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICS for the virtual machines that it creates to subnets from the networking resource group. Your VNet must meet the following characteristics: The VNet's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses. You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the specified subnets exist. There are two private subnets, one for the control plane machines and one for the compute machines. The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. If required, the installation program creates public load balancers that manage the control plane and worker nodes, and Azure allocates a public IP address to them. Note If you destroy a cluster that uses an existing VNet, the VNet is not deleted. 9.4.1.1. Network security group requirements The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports. Important The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails. Table 9.1. 
Required ports Port Description Control plane Compute 80 Allows HTTP traffic x 443 Allows HTTPS traffic x 6443 Allows communication to the control plane machines x 22623 Allows internal communication to the machine config server for provisioning machines x If you are using Azure Firewall to restrict the internet access, then you can configure Azure Firewall to allow the Azure APIs . A network security group rule is not needed. Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Because cluster components do not modify the user-provided network security groups, which the Kubernetes controllers update, a pseudo-network security group is created for the Kubernetes controller to modify without impacting the rest of the environment. Table 9.2. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If you configure an external NTP time server, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 9.3. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Additional resources About the OpenShift SDN network plugin Configuring your firewall 9.4.2. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnet, or ingress rules. The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes. 9.4.3. 
Isolation between clusters Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet. 9.5. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 9.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. 
SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 9.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 9.8. Manually creating the installation configuration file When installing OpenShift Container Platform on Microsoft Azure into a government region, you must manually generate your installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. 
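Before generating the configuration file, it can help to confirm that your local Azure tooling points at the government cloud rather than the public cloud, and that the installer binary you extracted is the version you expect. The Azure CLI steps below are optional and are not required by the installation program itself; this is only a sketch.
az cloud set --name AzureUSGovernment    # switch the Azure CLI to the Azure Government endpoints
az login                                 # authenticate against the government cloud
az account show --output table           # verify the active subscription and tenant
./openshift-install version              # confirm the installer version extracted earlier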
Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. Additional resources Installation configuration parameters for Azure 9.8.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 9.4. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform. Additional resources Optimizing storage 9.8.2. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 9.1. 
Machine types based on 64-bit x86 architecture standardBSFamily standardDADSv5Family standardDASv4Family standardDASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHCSFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSMediumMemoryv2Family standardMIDSMediumMemoryv2Family standardMISMediumMemoryv2Family standardMSFamily standardMSMediumMemoryv2Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 9.8.3. Enabling trusted launch for Azure VMs You can enable two trusted launch features when installing your cluster on Azure: secure boot and virtualized Trusted Platform Modules . See the Azure documentation about virtual machine sizes to learn what sizes of virtual machines support these features. Important Trusted launch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 1 Specify controlPlane.platform.azure or compute.platform.azure to enable trusted launch on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to enable trusted launch on all nodes. 2 Enable trusted launch features. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 9.8.4. Enabling confidential VMs You can enable confidential VMs when installing your cluster. You can enable confidential VMs for compute nodes, control plane nodes, or all nodes. Important Using confidential VMs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can use confidential VMs with the following VM sizes: DCasv5-series DCadsv5-series ECasv5-series ECadsv5-series Important Confidential VMs are currently not supported on 64-bit ARM architectures. Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5 1 Specify controlPlane.platform.azure or compute.platform.azure to deploy confidential VMs on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to deploy confidential VMs on all nodes. 2 Enable confidential VMs. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 5 Specify VMGuestStateOnly to encrypt the VM guest state. 9.8.5. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: usgovvirginia resourceGroupName: existing_resource_group 14 networkResourceGroupName: vnet_resource_group 15 virtualNetwork: vnet 16 controlPlaneSubnet: control_plane_subnet 17 computeSubnet: compute_subnet 18 outboundType: UserDefinedRouting 19 cloudName: AzureUSGovernmentCloud 20 pullSecret: '{"auths": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 
23 publish: Internal 24 1 10 21 Required. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 12 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image that should be used to boot control plane and compute machines. The publisher , offer , sku , and version parameters under platform.azure.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the parameters under controlPlane.platform.azure.osImage or compute.platform.azure.osImage are set, they override the platform.azure.defaultMachinePlatform.osImage parameters. 13 Specify the name of the resource group that contains the DNS zone for your base domain. 14 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 15 If you use an existing VNet, specify the name of the resource group that contains it. 16 If you use an existing VNet, specify its name. 17 If you use an existing VNet, specify the name of the subnet to host the control plane machines. 18 If you use an existing VNet, specify the name of the subnet to host the compute machines. 19 You can customize your own outbound routing. Configuring user-defined routing prevents exposing external endpoints in your cluster. User-defined routing for egress requires deploying your cluster to an existing VNet. 20 Specify the name of the Azure cloud environment to deploy your cluster to. Set AzureUSGovernmentCloud to deploy to a Microsoft Azure Government (MAG) region. The default value is AzurePublicCloud . 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . 
When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 23 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 24 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 9.8.6. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 
5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Additional resources For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs . 9.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID. If you are installing the cluster using a service principal, you have its application ID and password. If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from. If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites: You have its client ID. You have assigned it to the virtual machine that you will run the installation program from. Procedure Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation. Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . If the installation program cannot locate the osServicePrincipal.json configuration file from a previous installation, you are prompted for Azure subscription and authentication values. Enter the following Azure parameter values for your subscription: azure subscription id : Enter the subscription ID to use for the cluster. azure tenant id : Enter the tenant ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id : If you are using a service principal, enter its application ID. 
If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, specify its client ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret : If you are using a service principal, enter its password. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity,leave this value blank. If previously not detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 9.10. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . 
To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 9.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 9.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 9.13. steps Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4",
"controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: usgovvirginia resourceGroupName: existing_resource_group 14 networkResourceGroupName: vnet_resource_group 15 virtualNetwork: vnet 16 controlPlaneSubnet: control_plane_subnet 17 computeSubnet: compute_subnet 18 outboundType: UserDefinedRouting 19 cloudName: AzureUSGovernmentCloud 20 pullSecret: '{\"auths\": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 publish: Internal 24",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_azure/installing-azure-government-region |
Chapter 84. Kubernetes Services | Chapter 84. Kubernetes Services Since Camel 2.17 Both producer and consumer are supported The Kubernetes Services component is one of the Kubernetes Components which provides a producer to execute Kubernetes Service operations and a consumer to consume events related to Service objects. 84.1. Dependencies When using kubernetes-services with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 84.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 84.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 84.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 84.3. Component Options The Kubernetes Services component supports 4 options, which are listed below. Name Description Default Type kubernetesClient (common) Autowired To use an existing kubernetes client. KubernetesClient bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 84.4. Endpoint Options The Kubernetes Services endpoint is configured using URI syntax: with the following path and query parameters: 84.4.1. Path Parameters (1 parameters) Name Description Default Type masterUrl (common) Required Kubernetes Master url. String 84.4.2. Query Parameters (33 parameters) Name Description Default Type apiVersion (common) The Kubernetes API Version to use. String dnsDomain (common) The dns domain, used for ServiceCall EIP. String kubernetesClient (common) Default KubernetesClient to use if provided. KubernetesClient namespace (common) The namespace. String portName (common) The port name, used for ServiceCall EIP. String portProtocol (common) The port protocol, used for ServiceCall EIP. tcp String crdGroup (consumer) The Consumer CRD Resource Group we would like to watch. String crdName (consumer) The Consumer CRD Resource name we would like to watch. String crdPlural (consumer) The Consumer CRD Resource Plural we would like to watch. String crdScope (consumer) The Consumer CRD Resource Scope we would like to watch. String crdVersion (consumer) The Consumer CRD Resource Version we would like to watch. String labelKey (consumer) The Consumer Label key when watching at some resources. String labelValue (consumer) The Consumer Label value when watching at some resources. String poolSize (consumer) The Consumer pool size. 1 int resourceName (consumer) The Consumer Resource Name we would like to watch. String bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut ExchangePattern operation (producer) Producer operation to do on Kubernetes. String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 84.5. Message Headers The Kubernetes Services component supports 7 message header(s), which is/are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesNamespaceName (producer) Constant: KUBERNETES_NAMESPACE_NAME The namespace name. String CamelKubernetesServiceLabels (producer) Constant: KUBERNETES_SERVICE_LABELS The service labels. Map CamelKubernetesServiceName (producer) Constant: KUBERNETES_SERVICE_NAME The service name. String CamelKubernetesServiceSpec (producer) Constant: KUBERNETES_SERVICE_SPEC The spec of a service. ServiceSpec CamelKubernetesEventAction (consumer) Constant: KUBERNETES_EVENT_ACTION Action watched by the consumer. Enum values: ADDED MODIFIED DELETED ERROR BOOKMARK Action CamelKubernetesEventTimestamp (consumer) Constant: KUBERNETES_EVENT_TIMESTAMP Timestamp of the action watched by the consumer. long 84.6. Supported producer operation listServices listServicesByLabels getService createService deleteService 84.7. Kubernetes Services Producer Examples listServices: this operation list the services on a kubernetes cluster. from("direct:list"). toF("kubernetes-services:///?kubernetesClient=#kubernetesClient&operation=listServices"). to("mock:result"); This operation returns a List of services from your cluster. listServicesByLabels: this operation list the deployments by labels on a kubernetes cluster. from("direct:listByLabels").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put("key1", "value1"); labels.put("key2", "value2"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_SERVICE_LABELS, labels); } }); toF("kubernetes-services:///?kubernetesClient=#kubernetesClient&operation=listServicesByLabels"). to("mock:result"); This operation returns a List of Services from your cluster, using a label selector (with key1 and key2, with value value1 and value2). 84.8. 
Kubernetes Services Consumer Example fromF("kubernetes-services://%s?oauthToken=%s&namespace=default&resourceName=test", host, authToken).process(new KubernertesProcessor()).to("mock:result"); public class KubernertesProcessor implements Processor { @Override public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); Service sv = exchange.getIn().getBody(Service.class); log.info("Got event with configmap name: " + sv.getMetadata().getName() + " and action " + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); } } This consumer returns a list of events on the namespace default for the service test. 84.8.1. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. 
Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. 
KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>",
"kubernetes-services:masterUrl",
"from(\"direct:list\"). toF(\"kubernetes-services:///?kubernetesClient=#kubernetesClient&operation=listServices\"). to(\"mock:result\");",
"from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_SERVICE_LABELS, labels); } }); toF(\"kubernetes-services:///?kubernetesClient=#kubernetesClient&operation=listServicesByLabels\"). to(\"mock:result\");",
"fromF(\"kubernetes-services://%s?oauthToken=%s&namespace=default&resourceName=test\", host, authToken).process(new KubernertesProcessor()).to(\"mock:result\"); public class KubernertesProcessor implements Processor { @Override public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); Service sv = exchange.getIn().getBody(Service.class); log.info(\"Got event with configmap name: \" + sv.getMetadata().getName() + \" and action \" + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); } }"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-services-component-starter |
B.43.3. RHSA-2011:0452 - Important: libtiff security update | B.43.3. RHSA-2011:0452 - Important: libtiff security update Updated libtiff packages that fix one security issue are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link(s) associated with each description below. The libtiff packages contain a library of functions for manipulating Tagged Image File Format (TIFF) files. CVE-2009-5022 A heap-based buffer overflow flaw was found in the way libtiff processed certain TIFF image files that were compressed with the JPEG compression algorithm. An attacker could use this flaw to create a specially-crafted TIFF file that, when opened, would cause an application linked against libtiff to crash or, possibly, execute arbitrary code. All libtiff users should upgrade to these updated packages, which contain a backported patch to resolve this issue. All running applications linked against libtiff must be restarted for this update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/rhsa-2011-0452 |
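The advisory itself lists no commands. On a subscribed Red Hat Enterprise Linux 6 system the update would typically be applied as shown below; treat this as a generic illustration rather than text from the advisory.

yum update libtiff        # run as root on the affected RHEL 6 system
rpm -q libtiff            # confirm the patched build is now installed
lsof | grep libtiff       # list processes that still map the old library and need a restart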
Chapter 13. Optional: Installing on Nutanix If you install OpenShift Container Platform on Nutanix, the Assisted Installer can integrate the OpenShift Container Platform cluster with the Nutanix platform, which exposes the Machine API to Nutanix and enables autoscaling and dynamically provisioning storage containers with the Nutanix Container Storage Interface (CSI). 13.1. Adding hosts on Nutanix with the UI To add hosts on Nutanix with the user interface (UI), generate the discovery image ISO from the Assisted Installer. Use the minimal discovery image ISO. This is the default setting. The image includes only what is required to boot a host with networking. The majority of the content is downloaded upon boot. The ISO image is about 100MB in size. After this is complete, you must create an image for the Nutanix platform and create the Nutanix virtual machines. Prerequisites You have created a cluster profile in the Assisted Installer UI. You have a Nutanix cluster environment set up, and made a note of the cluster name and subnet name. Procedure In Cluster details , select Nutanix from the Integrate with external partner platforms dropdown list. The Include custom manifest checkbox is optional. In Host discovery, click the Add hosts button. Optional: Add an SSH public key so that you can connect to the Nutanix VMs as the core user. Having a login to the cluster hosts can provide you with debugging information during the installation. If you do not have an existing SSH key pair on your local machine, follow the steps in Generating a key pair for cluster node SSH access . In the SSH public key field, click Browse to upload the id_rsa.pub file containing the SSH public key. Alternatively, drag and drop the file into the field from the file manager. To see the file in the file manager, select Show hidden files in the menu. Select the desired provisioning type. Note Minimal image file: Provision with virtual media downloads a smaller image that will fetch the data needed to boot. In Networking , select Cluster-managed networking . Nutanix does not support User-managed networking . Optional: If the cluster hosts are behind a firewall that requires the use of a proxy, select Configure cluster-wide proxy settings . Enter the username, password, IP address and port for the HTTP and HTTPS URLs of the proxy server. Optional: Configure the discovery image if you want to boot it with an ignition file. See Configuring the discovery image for additional details. Click Generate Discovery ISO . Copy the Discovery ISO URL . In the Nutanix Prism UI, follow the directions to upload the discovery image from the Assisted Installer . In the Nutanix Prism UI, create the control plane (master) VMs through Prism Central . Enter the Name . For example, control-plane or master . Enter the Number of VMs . This should be 3 for the control plane. Ensure the remaining settings meet the minimum requirements for control plane hosts. In the Nutanix Prism UI, create the worker VMs through Prism Central . Enter the Name . For example, worker . Enter the Number of VMs . You should create at least 2 worker nodes. Ensure the remaining settings meet the minimum requirements for worker hosts. Return to the Assisted Installer user interface and wait until the Assisted Installer discovers the hosts and each of them has a Ready status. Continue with the installation procedure.
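If you prefer to prepare the inputs from a terminal before switching to the Prism UI, the sketch below covers the two optional items mentioned above: creating a key pair for the SSH public key field and saving the generated discovery ISO locally from the copied URL. The file names and the local download are illustrative assumptions, not requirements of this procedure.

# Create an ed25519 key pair; paste the .pub contents into the "SSH public key" field.
ssh-keygen -t ed25519 -N '' -f ~/.ssh/nutanix_ai_ed25519   # key path is an arbitrary example
cat ~/.ssh/nutanix_ai_ed25519.pub

# Optionally save the discovery image locally using the "Discovery ISO URL" copied
# from the Assisted Installer before uploading it to Prism Central.
DISCOVERY_ISO_URL="<discovery_iso_url>"                    # replace with the copied URL
curl -L -o ocp_ai_discovery_image.iso "$DISCOVERY_ISO_URL"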
13.2. Adding hosts on Nutanix with the API To add hosts on Nutanix with the API, generate the discovery image ISO from the Assisted Installer. Use the minimal discovery image ISO. This is the default setting. The image includes only what is required to boot a host with networking. The majority of the content is downloaded upon boot. The ISO image is about 100MB in size. Once this is complete, you must create an image for the Nutanix platform and create the Nutanix virtual machines. Prerequisites You have set up the Assisted Installer API authentication. You have created an Assisted Installer cluster profile. You have created an Assisted Installer infrastructure environment. You have your infrastructure environment ID exported in your shell as $INFRA_ENV_ID . You have completed the Assisted Installer cluster configuration. You have a Nutanix cluster environment set up, and made a note of the cluster name and subnet name. Procedure Configure the discovery image if you want it to boot with an ignition file. Create a Nutanix cluster configuration file to hold the environment variables: $ touch ~/nutanix-cluster-env.sh $ chmod +x ~/nutanix-cluster-env.sh If you have to start a new terminal session, you can reload the environment variables easily. For example: $ source ~/nutanix-cluster-env.sh Assign the Nutanix cluster's name to the NTX_CLUSTER_NAME environment variable in the configuration file: $ cat << EOF >> ~/nutanix-cluster-env.sh export NTX_CLUSTER_NAME=<cluster_name> EOF Replace <cluster_name> with the name of the Nutanix cluster. Assign the Nutanix cluster's subnet name to the NTX_SUBNET_NAME environment variable in the configuration file: $ cat << EOF >> ~/nutanix-cluster-env.sh export NTX_SUBNET_NAME=<subnet_name> EOF Replace <subnet_name> with the name of the Nutanix cluster's subnet. Refresh the API token: $ source refresh-token Get the download URL: $ curl -H "Authorization: Bearer ${API_TOKEN}" \ https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/downloads/image-url Create the Nutanix image configuration file: $ cat << EOF > create-image.json { "spec": { "name": "ocp_ai_discovery_image.iso", "description": "ocp_ai_discovery_image.iso", "resources": { "architecture": "X86_64", "image_type": "ISO_IMAGE", "source_uri": "<image_url>", "source_options": { "allow_insecure_connection": true } } }, "metadata": { "spec_version": 3, "kind": "image" } } EOF Replace <image_url> with the image URL that you downloaded in the previous step. Create the Nutanix image: $ curl -k -u <user>:'<password>' -X 'POST' \ 'https://<domain-or-ip>:<port>/api/nutanix/v3/images' \ -H 'accept: application/json' \ -H 'Content-Type: application/json' \ -d @./create-image.json | jq '.metadata.uuid' Replace <user> with the Nutanix user name. Replace '<password>' with the Nutanix password. Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform. Replace <port> with the port for the Nutanix server. The port defaults to 9440 . Assign the returned UUID to the NTX_IMAGE_UUID environment variable in the configuration file: $ cat << EOF >> ~/nutanix-cluster-env.sh export NTX_IMAGE_UUID=<uuid> EOF Replace <uuid> with the returned UUID of the Nutanix image.
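The Prism Central calls in this procedure all share the same credentials and JSON headers, so a small wrapper can reduce repetition. This is only a convenience sketch: the variable names are not used elsewhere in this document, and the images/list example assumes that endpoint follows the same pattern as the clusters/list and subnets/list calls shown in this section.

# Convenience wrapper around the Prism Central v3 REST calls used in this procedure.
NTX_USER="<user>"                          # Nutanix user name
NTX_PASSWORD="<password>"                  # Nutanix password
NTX_ENDPOINT="https://<domain-or-ip>:9440" # Prism Central address; 9440 is the default port

ntx_api() {
    # $1 = HTTP method, $2 = API path, $3 = optional JSON body
    local method="$1" path="$2" body="${3:-}"
    local args=(-k -s -u "${NTX_USER}:${NTX_PASSWORD}" -X "$method"
                -H 'accept: application/json' -H 'Content-Type: application/json')
    if [ -n "$body" ]; then
        args+=(-d "$body")
    fi
    curl "${args[@]}" "${NTX_ENDPOINT}${path}"
}

# Example: list the uploaded images with their names and UUIDs.
ntx_api POST /api/nutanix/v3/images/list '{ "kind": "image" }' \
    | jq '.entities[] | {name: .spec.name, uuid: .metadata.uuid}'

The same wrapper can be reused for the clusters/list, subnets/list, and vms calls in the next steps if you find the repeated curl invocations error-prone.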
Replace '<password>' with the Nutanix password. Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform. Replace <port> with the port for the Nutanix server. The port defaults to 9440 . Replace <nutanix_cluster_name> with the name of the Nutanix cluster. Assign the returned Nutanix cluster UUID to the NTX_CLUSTER_UUID environment variable in the configuration file: USD cat << EOF >> ~/nutanix-cluster-env.sh export NTX_CLUSTER_UUID=<uuid> EOF Replace <uuid> with the returned UUID of the Nutanix cluster. Get the Nutanix cluster's subnet UUID: USD curl -k -u <user>:'<password>' -X 'POST' \ 'https://<domain-or-ip>:<port>/api/nutanix/v3/subnets/list' \ -H 'accept: application/json' \ -H 'Content-Type: application/json' \ -d '{ "kind": "subnet", "filter": "name==<subnet_name>" }' | jq '.entities[].metadata.uuid' Replace <user> with the Nutanix user name. Replace '<password>' with the Nutanix password. Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform. Replace <port> with the port for the Nutanix server. The port defaults to 9440 . Replace <subnet_name> with the name of the cluster's subnet. Assign the returned Nutanix subnet UUID to the NTX_SUBNET_UUID environment variable in the configuration file: USD cat << EOF >> ~/nutanix-cluster-env.sh export NTX_SUBNET_UUID=<uuid> EOF Replace <uuid> with the returned UUID of the cluster subnet. Ensure the Nutanix environment variables are set: USD source ~/nutanix-cluster-env.sh Create a VM configuration file for each Nutanix host. Create three control plane (master) VMs and at least two worker VMs. For example: USD touch create-master-0.json USD cat << EOF > create-master-0.json { "spec": { "name": "<host_name>", "resources": { "power_state": "ON", "num_vcpus_per_socket": 1, "num_sockets": 16, "memory_size_mib": 32768, "disk_list": [ { "disk_size_mib": 122880, "device_properties": { "device_type": "DISK" } }, { "device_properties": { "device_type": "CDROM" }, "data_source_reference": { "kind": "image", "uuid": "USDNTX_IMAGE_UUID" } } ], "nic_list": [ { "nic_type": "NORMAL_NIC", "is_connected": true, "ip_endpoint_list": [ { "ip_type": "DHCP" } ], "subnet_reference": { "kind": "subnet", "name": "USDNTX_SUBNET_NAME", "uuid": "USDNTX_SUBNET_UUID" } } ], "guest_tools": { "nutanix_guest_tools": { "state": "ENABLED", "iso_mount_state": "MOUNTED" } } }, "cluster_reference": { "kind": "cluster", "name": "USDNTX_CLUSTER_NAME", "uuid": "USDNTX_CLUSTER_UUID" } }, "api_version": "3.1.0", "metadata": { "kind": "vm" } } EOF Replace <host_name> with the name of the host. Boot each Nutanix virtual machine: USD curl -k -u <user>:'<password>' -X 'POST' \ 'https://<domain-or-ip>:<port>/api/nutanix/v3/vms' \ -H 'accept: application/json' \ -H 'Content-Type: application/json' \ -d @./<vm_config_file_name> | jq '.metadata.uuid' Replace <user> with the Nutanix user name. Replace '<password>' with the Nutanix password. Replace <domain-or-ip> with the domain name or IP address of the Nutanix platform. Replace <port> with the port for the Nutanix server. The port defaults to 9440 . Replace <vm_config_file_name> with the name of the VM configuration file. Assign the returned VM UUID to a unique environment variable in the configuration file: USD cat << EOF >> ~/nutanix-cluster-env.sh export NTX_MASTER_0_UUID=<uuid> EOF Replace <uuid> with the returned UUID of the VM. Note The environment variable must have a unique name for each VM. Wait until the Assisted Installer has discovered each VM and each VM has passed validation.
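One way to wait is to poll the Assisted Installer API until the number of discovered hosts matches the number of VMs that you booted. The following loop is only a sketch, not part of the official procedure; it assumes that CLUSTER_ID and API_TOKEN are exported in your shell and that you created five VMs (three control plane and two workers). The single API call that it wraps is shown after the loop.
# Poll the cluster until all expected hosts are discovered (adjust EXPECTED to match your VM count)
EXPECTED=5
while true; do
  COUNT=$(curl -s -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID}" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer ${API_TOKEN}" | jq '.enabled_host_count')
  echo "Discovered hosts: ${COUNT:-0}/${EXPECTED}"
  [ "${COUNT}" = "${EXPECTED}" ] && break
  sleep 30
done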
USD curl -s -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID" --header "Content-Type: application/json" -H "Authorization: Bearer USDAPI_TOKEN" | jq '.enabled_host_count' Modify the cluster definition to enable integration with Nutanix: USD curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "platform_type":"nutanix" } ' | jq Continue with the installation procedure. 13.3. Nutanix post-installation configuration Follow the steps below to complete and validate the OpenShift Container Platform integration with the Nutanix cloud provider. Prerequisites The Assisted Installer has finished installing the cluster successfully. The cluster is connected to console.redhat.com . You have access to the Red Hat OpenShift Container Platform command line interface. 13.3.1. Updating the Nutanix configuration settings After installing OpenShift Container Platform on the Nutanix platform using the Assisted Installer, you must update the following Nutanix configuration settings manually: <prismcentral_username> : The Nutanix Prism Central username. <prismcentral_password> : The Nutanix Prism Central password. <prismcentral_address> : The Nutanix Prism Central address. <prismcentral_port> : The Nutanix Prism Central port. <prismelement_username> : The Nutanix Prism Element username. <prismelement_password> : The Nutanix Prism Element password. <prismelement_address> : The Nutanix Prism Element address. <prismelement_port> : The Nutanix Prism Element port. <prismelement_clustername> : The Nutanix Prism Element cluster name. <nutanix_storage_container> : The Nutanix Prism storage container. Procedure In the OpenShift Container Platform command line interface, update the Nutanix cluster configuration settings: USD oc patch infrastructure/cluster --type=merge --patch-file=/dev/stdin <<-EOF { "spec": { "platformSpec": { "nutanix": { "prismCentral": { "address": "<prismcentral_address>", "port": <prismcentral_port> }, "prismElements": [ { "endpoint": { "address": "<prismelement_address>", "port": <prismelement_port> }, "name": "<prismelement_clustername>" } ] }, "type": "Nutanix" } } } EOF Sample output infrastructure.config.openshift.io/cluster patched For additional details, see Creating a machine set on Nutanix . 
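To confirm that the patch was applied, you can read the values back from the infrastructure resource. This is a minimal check only; it assumes you replaced the placeholders above with your real Prism Central and Prism Element details:
# The platform type should report Nutanix
oc get infrastructure cluster -o jsonpath='{.spec.platformSpec.type}{"\n"}'
# The Prism Central address should match the value you patched in
oc get infrastructure cluster -o jsonpath='{.spec.platformSpec.nutanix.prismCentral.address}{"\n"}'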
Create the Nutanix secret: USD cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: nutanix-credentials namespace: openshift-machine-api type: Opaque stringData: credentials: | [{"type":"basic_auth","data":{"prismCentral":{"username":"USD{<prismcentral_username>}","password":"USD{<prismcentral_password>}"},"prismElements":null}}] EOF Sample output secret/nutanix-credentials created When installing OpenShift Container Platform version 4.13 or later, update the Nutanix cloud provider configuration: Get the Nutanix cloud provider configuration YAML file: USD oc get cm cloud-provider-config -o yaml -n openshift-config > cloud-provider-config-backup.yaml Create a backup of the configuration file: USD cp cloud-provider-config_backup.yaml cloud-provider-config.yaml Open the configuration YAML file: USD vi cloud-provider-config.yaml Edit the configuration YAML file as follows: kind: ConfigMap apiVersion: v1 metadata: name: cloud-provider-config namespace: openshift-config data: config: | { "prismCentral": { "address": "<prismcentral_address>", "port":<prismcentral_port>, "credentialRef": { "kind": "Secret", "name": "nutanix-credentials", "namespace": "openshift-cloud-controller-manager" } }, "topologyDiscovery": { "type": "Prism", "topologyCategories": null }, "enableCustomLabeling": true } Apply the configuration updates: USD oc apply -f cloud-provider-config.yaml Sample output Warning: resource configmaps/cloud-provider-config is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by oc apply. oc apply should only be used on resources created declaratively by either oc create --save-config or oc apply. The missing annotation will be patched automatically. configmap/cloud-provider-config configured 13.3.2. Creating the Nutanix CSI Operator group Create an Operator group for the Nutanix CSI Operator. Note For a description of operator groups and related concepts, see Common Operator Framework Terms in Additional Resources . Procedure Open the Nutanix CSI Operator Group YAML file: USD vi openshift-cluster-csi-drivers-operator-group.yaml Edit the YAML file as follows: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: generateName: openshift-cluster-csi-drivers namespace: openshift-cluster-csi-drivers spec: targetNamespaces: - openshift-cluster-csi-drivers upgradeStrategy: Default Create the Operator Group: USD oc create -f openshift-cluster-csi-drivers-operator-group.yaml Sample output 13.3.3. Installing the Nutanix CSI Operator The Nutanix Container Storage Interface (CSI) Operator for Kubernetes deploys and manages the Nutanix CSI Driver. Note For instructions on performing this step through the Red Hat OpenShift Container Platform, see the Installing the Operator section of the Nutanix CSI Operator document in Additional Resources . 
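Before creating the subscription in the next section, you can optionally confirm that the Operator group from the previous step exists. This is only a quick sanity check, not part of the documented procedure:
# Expect one Operator group with a generated name such as openshift-cluster-csi-driversjw9cd
oc get operatorgroup -n openshift-cluster-csi-drivers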
Procedure Get the parameter values for the Nutanix CSI Operator YAML file: Check that the Nutanix CSI Operator exists: USD oc get packagemanifests | grep nutanix Sample output Assign the default channel for the Operator to a BASH variable: USD DEFAULT_CHANNEL=USD(oc get packagemanifests nutanixcsioperator -o jsonpath={.status.defaultChannel}) Assign the starting cluster service version (CSV) for the Operator to a BASH variable: USD STARTING_CSV=USD(oc get packagemanifests nutanixcsioperator -o jsonpath=\{.status.channels[*].currentCSV\}) Assign the catalog source for the subscription to a BASH variable: USD CATALOG_SOURCE=USD(oc get packagemanifests nutanixcsioperator -o jsonpath=\{.status.catalogSource\}) Assign the Nutanix CSI Operator source namespace to a BASH variable: USD SOURCE_NAMESPACE=USD(oc get packagemanifests nutanixcsioperator -o jsonpath=\{.status.catalogSourceNamespace\}) Create the Nutanix CSI Operator YAML file using the BASH variables: USD cat << EOF > nutanixcsioperator.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: nutanixcsioperator namespace: openshift-cluster-csi-drivers spec: channel: USDDEFAULT_CHANNEL installPlanApproval: Automatic name: nutanixcsioperator source: USDCATALOG_SOURCE sourceNamespace: USDSOURCE_NAMESPACE startingCSV: USDSTARTING_CSV EOF Create the CSI Nutanix Operator: USD oc apply -f nutanixcsioperator.yaml Sample output subscription.operators.coreos.com/nutanixcsioperator created Run the following command until the Operator subscription state changes to AtLatestKnown . This indicates that the Operator subscription has been created, and may take some time. USD oc get subscription nutanixcsioperator -n openshift-cluster-csi-drivers -o 'jsonpath={..status.state}' 13.3.4. Deploying the Nutanix CSI storage driver The Nutanix Container Storage Interface (CSI) Driver for Kubernetes provides scalable and persistent storage for stateful applications. Note For instructions on performing this step through the Red Hat OpenShift Container Platform, see the Installing the CSI Driver using the Operator section of the Nutanix CSI Operator document in Additional Resources . Procedure Create a NutanixCsiStorage resource to deploy the driver: USD cat <<EOF | oc create -f - apiVersion: crd.nutanix.com/v1alpha1 kind: NutanixCsiStorage metadata: name: nutanixcsistorage namespace: openshift-cluster-csi-drivers spec: {} EOF Sample output snutanixcsistorage.crd.nutanix.com/nutanixcsistorage created Create a Nutanix secret YAML file for the CSI storage driver: USD cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: ntnx-secret namespace: openshift-cluster-csi-drivers stringData: # prism-element-ip:prism-port:admin:password key: <prismelement_address:prismelement_port:prismcentral_username:prismcentral_password> 1 EOF Note 1 Replace these parameters with actual values while keeping the same format. Sample output secret/nutanix-secret created 13.3.5. Validating the post-installation configurations Run the following steps to validate the configuration. 
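Before running the validation steps, you can check that the CSI driver components rolled out by the Operator are up. This is a sketch only; exact pod names vary with the driver version:
# Controller and node pods for the Nutanix CSI driver should be Running
oc get pods -n openshift-cluster-csi-drivers
# The driver should be registered with the cluster under the csi.nutanix.com provisioner name
oc get csidrivers | grep csi.nutanix.com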
Procedure Verify that you can create a storage class: USD cat <<EOF | oc create -f - kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: nutanix-volume annotations: storageclass.kubernetes.io/is-default-class: 'true' provisioner: csi.nutanix.com parameters: csi.storage.k8s.io/fstype: ext4 csi.storage.k8s.io/provisioner-secret-namespace: openshift-cluster-csi-drivers csi.storage.k8s.io/provisioner-secret-name: ntnx-secret storageContainer: <nutanix_storage_container> 1 csi.storage.k8s.io/controller-expand-secret-name: ntnx-secret csi.storage.k8s.io/node-publish-secret-namespace: openshift-cluster-csi-drivers storageType: NutanixVolumes csi.storage.k8s.io/node-publish-secret-name: ntnx-secret csi.storage.k8s.io/controller-expand-secret-namespace: openshift-cluster-csi-drivers reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: Immediate EOF Note 1 Take <nutanix_storage_container> from the Nutanix configuration; for example, SelfServiceContainer. Sample output storageclass.storage.k8s.io/nutanix-volume created Verify that you can create the Nutanix persistent volume claim (PVC): Create the persistent volume claim (PVC): USD cat <<EOF | oc create -f - kind: PersistentVolumeClaim apiVersion: v1 metadata: name: nutanix-volume-pvc namespace: openshift-cluster-csi-drivers annotations: volume.beta.kubernetes.io/storage-provisioner: csi.nutanix.com volume.kubernetes.io/storage-provisioner: csi.nutanix.com finalizers: - kubernetes.io/pvc-protection spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: nutanix-volume volumeMode: Filesystem EOF Sample output persistentvolumeclaim/nutanix-volume-pvc created Validate that the persistent volume claim (PVC) status is Bound: USD oc get pvc -n openshift-cluster-csi-drivers Sample output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE nutanix-volume-pvc Bound nutanix-volume 52s Additional resources Creating a machine set on Nutanix . Nutanix CSI Operator Storage Management Common Operator Framework Terms | [
"touch ~/nutanix-cluster-env.sh",
"chmod +x ~/nutanix-cluster-env.sh",
"source ~/nutanix-cluster-env.sh",
"cat << EOF >> ~/nutanix-cluster-env.sh export NTX_CLUSTER_NAME=<cluster_name> EOF",
"cat << EOF >> ~/nutanix-cluster-env.sh export NTX_SUBNET_NAME=<subnet_name> EOF",
"source refresh-token",
"curl -H \"Authorization: Bearer USD{API_TOKEN}\" https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/downloads/image-url",
"cat << EOF > create-image.json { \"spec\": { \"name\": \"ocp_ai_discovery_image.iso\", \"description\": \"ocp_ai_discovery_image.iso\", \"resources\": { \"architecture\": \"X86_64\", \"image_type\": \"ISO_IMAGE\", \"source_uri\": \"<image_url>\", \"source_options\": { \"allow_insecure_connection\": true } } }, \"metadata\": { \"spec_version\": 3, \"kind\": \"image\" } } EOF",
"curl -k -u <user>:'<password>' -X 'POST' 'https://<domain-or-ip>:<port>/api/nutanix/v3/images -H 'accept: application/json' -H 'Content-Type: application/json' -d @./create-image.json | jq '.metadata.uuid'",
"cat << EOF >> ~/nutanix-cluster-env.sh export NTX_IMAGE_UUID=<uuid> EOF",
"curl -k -u <user>:'<password>' -X 'POST' 'https://<domain-or-ip>:<port>/api/nutanix/v3/clusters/list' -H 'accept: application/json' -H 'Content-Type: application/json' -d '{ \"kind\": \"cluster\" }' | jq '.entities[] | select(.spec.name==\"<nutanix_cluster_name>\") | .metadata.uuid'",
"cat << EOF >> ~/nutanix-cluster-env.sh export NTX_CLUSTER_UUID=<uuid> EOF",
"curl -k -u <user>:'<password>' -X 'POST' 'https://<domain-or-ip>:<port>/api/nutanix/v3/subnets/list' -H 'accept: application/json' -H 'Content-Type: application/json' -d '{ \"kind\": \"subnet\", \"filter\": \"name==<subnet_name>\" }' | jq '.entities[].metadata.uuid'",
"cat << EOF >> ~/nutanix-cluster-env.sh export NTX_SUBNET_UUID=<uuid> EOF",
"source ~/nutanix-cluster-env.sh",
"touch create-master-0.json",
"cat << EOF > create-master-0.json { \"spec\": { \"name\": \"<host_name>\", \"resources\": { \"power_state\": \"ON\", \"num_vcpus_per_socket\": 1, \"num_sockets\": 16, \"memory_size_mib\": 32768, \"disk_list\": [ { \"disk_size_mib\": 122880, \"device_properties\": { \"device_type\": \"DISK\" } }, { \"device_properties\": { \"device_type\": \"CDROM\" }, \"data_source_reference\": { \"kind\": \"image\", \"uuid\": \"USDNTX_IMAGE_UUID\" } } ], \"nic_list\": [ { \"nic_type\": \"NORMAL_NIC\", \"is_connected\": true, \"ip_endpoint_list\": [ { \"ip_type\": \"DHCP\" } ], \"subnet_reference\": { \"kind\": \"subnet\", \"name\": \"USDNTX_SUBNET_NAME\", \"uuid\": \"USDNTX_SUBNET_UUID\" } } ], \"guest_tools\": { \"nutanix_guest_tools\": { \"state\": \"ENABLED\", \"iso_mount_state\": \"MOUNTED\" } } }, \"cluster_reference\": { \"kind\": \"cluster\", \"name\": \"USDNTX_CLUSTER_NAME\", \"uuid\": \"USDNTX_CLUSTER_UUID\" } }, \"api_version\": \"3.1.0\", \"metadata\": { \"kind\": \"vm\" } } EOF",
"curl -k -u <user>:'<password>' -X 'POST' 'https://<domain-or-ip>:<port>/api/nutanix/v3/vms' -H 'accept: application/json' -H 'Content-Type: application/json' -d @./<vm_config_file_name> | jq '.metadata.uuid'",
"cat << EOF >> ~/nutanix-cluster-env.sh export NTX_MASTER_0_UUID=<uuid> EOF",
"curl -s -X GET \"https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID\" --header \"Content-Type: application/json\" -H \"Authorization: Bearer USDAPI_TOKEN\" | jq '.enabled_host_count'",
"curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"platform_type\":\"nutanix\" } ' | jq",
"oc patch infrastructure/cluster --type=merge --patch-file=/dev/stdin <<-EOF { \"spec\": { \"platformSpec\": { \"nutanix\": { \"prismCentral\": { \"address\": \"<prismcentral_address>\", \"port\": <prismcentral_port> }, \"prismElements\": [ { \"endpoint\": { \"address\": \"<prismelement_address>\", \"port\": <prismelement_port> }, \"name\": \"<prismelement_clustername>\" } ] }, \"type\": \"Nutanix\" } } } EOF",
"infrastructure.config.openshift.io/cluster patched",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: nutanix-credentials namespace: openshift-machine-api type: Opaque stringData: credentials: | [{\"type\":\"basic_auth\",\"data\":{\"prismCentral\":{\"username\":\"USD{<prismcentral_username>}\",\"password\":\"USD{<prismcentral_password>}\"},\"prismElements\":null}}] EOF",
"secret/nutanix-credentials created",
"oc get cm cloud-provider-config -o yaml -n openshift-config > cloud-provider-config-backup.yaml",
"cp cloud-provider-config_backup.yaml cloud-provider-config.yaml",
"vi cloud-provider-config.yaml",
"kind: ConfigMap apiVersion: v1 metadata: name: cloud-provider-config namespace: openshift-config data: config: | { \"prismCentral\": { \"address\": \"<prismcentral_address>\", \"port\":<prismcentral_port>, \"credentialRef\": { \"kind\": \"Secret\", \"name\": \"nutanix-credentials\", \"namespace\": \"openshift-cloud-controller-manager\" } }, \"topologyDiscovery\": { \"type\": \"Prism\", \"topologyCategories\": null }, \"enableCustomLabeling\": true }",
"oc apply -f cloud-provider-config.yaml",
"Warning: resource configmaps/cloud-provider-config is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by oc apply. oc apply should only be used on resources created declaratively by either oc create --save-config or oc apply. The missing annotation will be patched automatically. configmap/cloud-provider-config configured",
"vi openshift-cluster-csi-drivers-operator-group.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: generateName: openshift-cluster-csi-drivers namespace: openshift-cluster-csi-drivers spec: targetNamespaces: - openshift-cluster-csi-drivers upgradeStrategy: Default",
"oc create -f openshift-cluster-csi-drivers-operator-group.yaml",
"operatorgroup.operators.coreos.com/openshift-cluster-csi-driversjw9cd created",
"oc get packagemanifests | grep nutanix",
"nutanixcsioperator Certified Operators 129m",
"DEFAULT_CHANNEL=USD(oc get packagemanifests nutanixcsioperator -o jsonpath={.status.defaultChannel})",
"STARTING_CSV=USD(oc get packagemanifests nutanixcsioperator -o jsonpath=\\{.status.channels[*].currentCSV\\})",
"CATALOG_SOURCE=USD(oc get packagemanifests nutanixcsioperator -o jsonpath=\\{.status.catalogSource\\})",
"SOURCE_NAMESPACE=USD(oc get packagemanifests nutanixcsioperator -o jsonpath=\\{.status.catalogSourceNamespace\\})",
"cat << EOF > nutanixcsioperator.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: nutanixcsioperator namespace: openshift-cluster-csi-drivers spec: channel: USDDEFAULT_CHANNEL installPlanApproval: Automatic name: nutanixcsioperator source: USDCATALOG_SOURCE sourceNamespace: USDSOURCE_NAMESPACE startingCSV: USDSTARTING_CSV EOF",
"oc apply -f nutanixcsioperator.yaml",
"subscription.operators.coreos.com/nutanixcsioperator created",
"oc get subscription nutanixcsioperator -n openshift-cluster-csi-drivers -o 'jsonpath={..status.state}'",
"cat <<EOF | oc create -f - apiVersion: crd.nutanix.com/v1alpha1 kind: NutanixCsiStorage metadata: name: nutanixcsistorage namespace: openshift-cluster-csi-drivers spec: {} EOF",
"snutanixcsistorage.crd.nutanix.com/nutanixcsistorage created",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: ntnx-secret namespace: openshift-cluster-csi-drivers stringData: # prism-element-ip:prism-port:admin:password key: <prismelement_address:prismelement_port:prismcentral_username:prismcentral_password> 1 EOF",
"secret/nutanix-secret created",
"cat <<EOF | oc create -f - kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: nutanix-volume annotations: storageclass.kubernetes.io/is-default-class: 'true' provisioner: csi.nutanix.com parameters: csi.storage.k8s.io/fstype: ext4 csi.storage.k8s.io/provisioner-secret-namespace: openshift-cluster-csi-drivers csi.storage.k8s.io/provisioner-secret-name: ntnx-secret storageContainer: <nutanix_storage_container> 1 csi.storage.k8s.io/controller-expand-secret-name: ntnx-secret csi.storage.k8s.io/node-publish-secret-namespace: openshift-cluster-csi-drivers storageType: NutanixVolumes csi.storage.k8s.io/node-publish-secret-name: ntnx-secret csi.storage.k8s.io/controller-expand-secret-namespace: openshift-cluster-csi-drivers reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: Immediate EOF",
"storageclass.storage.k8s.io/nutanix-volume created",
"cat <<EOF | oc create -f - kind: PersistentVolumeClaim apiVersion: v1 metadata: name: nutanix-volume-pvc namespace: openshift-cluster-csi-drivers annotations: volume.beta.kubernetes.io/storage-provisioner: csi.nutanix.com volume.kubernetes.io/storage-provisioner: csi.nutanix.com finalizers: - kubernetes.io/pvc-protection spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: nutanix-volume volumeMode: Filesystem EOF",
"persistentvolumeclaim/nutanix-volume-pvc created",
"oc get pvc -n openshift-cluster-csi-drivers",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE nutanix-volume-pvc Bound nutanix-volume 52s"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/assisted_installer_for_openshift_container_platform/assembly_installing-on-nutanix |
Chapter 6. Upgrading an Operator-based broker deployment | Chapter 6. Upgrading an Operator-based broker deployment The procedures in this section show how to upgrade: The AMQ Broker Operator version, using both the OpenShift command-line interface (CLI) and OperatorHub The broker container image for an Operator-based broker deployment 6.1. Before you begin This section describes some important considerations before you upgrade the Operator and broker container images for an Operator-based broker deployment. To upgrade an Operator-based broker deployment running on OpenShift Container Platform 3.11 to run on OpenShift Container Platform 4.5 or later, you must first upgrade your OpenShift Container Platform installation. Then, you must create a new Operator-based broker deployment that matches your existing deployment. To learn how to create a new Operator-based broker deployment, see Chapter 3, Deploying AMQ Broker on OpenShift Container Platform using the AMQ Broker Operator . Upgrading the Operator using either the OpenShift command-line interface (CLI) or OperatorHub requires cluster administrator privileges for your OpenShift cluster. If you originally used the CLI to install the Operator, you should also use the CLI to upgrade the Operator. If you originally used OperatorHub to install the Operator (that is, it appears under Operators Installed Operators for your project in the OpenShift Container Platform web console), you should also use OperatorHub to upgrade the Operator. For more information about these upgrade methods, see: Section 6.2, "Upgrading the Operator using the CLI" Section 6.3.3, "Upgrading the Operator using OperatorHub" 6.2. Upgrading the Operator using the CLI The procedures in this section show how to use the OpenShift command-line interface (CLI) to upgrade different versions of the Operator to the latest version available for AMQ Broker 7.8. 6.2.1. Prerequisites You should use the CLI to upgrade the Operator only if you originally used the CLI to install the Operator. If you originally used OperatorHub to install the Operator (that is, the Operator appears under Operators Installed Operators for your project in the OpenShift Container Platform web console), you should use OperatorHub to upgrade the Operator. To learn how to upgrade the Operator using OperatorHub, see Section 6.3, "Upgrading the Operator using OperatorHub" . 6.2.2. Upgrading version 0.19 of the Operator This procedure shows to how to use the OpenShift command-line interface (CLI) to upgrade version 0.19 of the Operator to the latest version for AMQ Broker 7.8. Procedure In your web browser, navigate to the Software Downloads page for AMQ Broker 7.8.5 patches . Ensure that the value of the Version drop-down list is set to 7.8.5 and the Patches tab is selected. to AMQ Broker 7.8.5 .3 Operator Installation and Example Files , click Download . Download of the amq-broker-operator-7.8.5-ocp-install-examples.zip compressed archive automatically begins. When the download has completed, move the archive to your chosen installation directory. The following example moves the archive to a directory called ~/broker/operator . mkdir ~/broker/operator mv amq-broker-operator-7.8.5-ocp-install-examples.zip ~/broker/operator In your chosen installation directory, extract the contents of the archive. For example: cd ~/broker/operator unzip amq-broker-operator-7.8.5-ocp-install-examples.zip Log in to OpenShift Container Platform as an administrator for the project that contains your existing Operator deployment. 
USD oc login -u <user> Switch to the OpenShift project in which you want to upgrade your Operator version. USD oc project <project-name> In the deploy directory of the latest Operator archive that you downloaded and extracted, open the operator.yaml file. Note In the operator.yaml file, the Operator uses an image that is represented by a Secure Hash Algorithm (SHA) value. The comment line, which begins with a number sign ( # ) symbol, denotes that the SHA value corresponds to a specific container image tag. Open the operator.yaml file for your Operator deployment. Check that any non-default values that you specified in your configuration are replicated in the new operator.yaml configuration file. If you have made any updates to the new operator.yaml file, save the file. Apply the updated Operator configuration. USD oc apply -f deploy/operator.yaml OpenShift updates your project to use the latest Operator version. To recreate your broker deployment, create a new CR yaml file to match the purpose of your original CR and apply it. Section 3.4.1, "Deploying a basic broker instance" . describes how to apply the deploy/crs/broker_activemqartemis_cr.yaml file in the Operator installation archive, you can use that file as a basis for your new CR yaml file. 6.2.3. Upgrading version 0.18 of the Operator This procedure shows to how to use the OpenShift command-line interface (CLI) to upgrade version 0.18 of the Operator (that is, the first version available for AMQ Broker 7.8) to the latest version for AMQ Broker 7.8. Procedure In your web browser, navigate to the Software Downloads page for AMQ Broker 7.8.5 patches . Ensure that the value of the Version drop-down list is set to 7.8.5 and the Patches tab is selected. to AMQ Broker 7.8.5 .3 Operator Installation and Example Files , click Download . Download of the amq-broker-operator-7.8.5-ocp-install-examples.zip compressed archive automatically begins. When the download has completed, move the archive to your chosen installation directory. The following example moves the archive to a directory called ~/broker/operator . mkdir ~/broker/operator mv amq-broker-operator-7.8.5-ocp-install-examples.zip ~/broker/operator In your chosen installation directory, extract the contents of the archive. For example: cd ~/broker/operator unzip amq-broker-operator-7.8.5-ocp-install-examples.zip Log in to OpenShift Container Platform as an administrator for the project that contains your existing Operator deployment. USD oc login -u <user> Switch to the OpenShift project in which you want to upgrade your Operator version. USD oc project <project-name> In the deploy directory of the latest Operator archive that you downloaded and extracted, open the operator.yaml file. Note In the operator.yaml file, the Operator uses an image that is represented by a Secure Hash Algorithm (SHA) value. The comment line, which begins with a number sign ( # ) symbol, denotes that the SHA value corresponds to a specific container image tag. Open the operator.yaml file for your Operator deployment. Check that any non-default values that you specified in your configuration are replicated in the new operator.yaml configuration file. Note The operator.yaml file for version 0.18 of the Operator includes environment variables whose names begin with BROKER_IMAGE . Do not replicate these environment variables in your new configuration. The latest version of the Operator for AMQ Broker 7.8 no longer uses these environment variables. 
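A unified diff is a convenient way to spot non-default values that need to be carried over into the new file. This is only a sketch; the paths are assumptions based on where you extracted the new archive and where you keep your currently deployed operator.yaml, and the Deployment name amq-broker-operator may differ in your project:
# Compare your currently deployed configuration with the new file from the extracted archive (paths are examples)
diff -u /path/to/previous/deploy/operator.yaml ~/broker/operator/deploy/operator.yaml
# If you no longer have the original file, you can dump the live Operator Deployment instead;
# the name amq-broker-operator is an assumption and may differ in your project
oc get deployment amq-broker-operator -o yaml > current-operator.yaml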
If you have made any updates to the new operator.yaml file, save the file. Apply the updated Operator configuration. USD oc apply -f deploy/operator.yaml OpenShift updates your project to use the latest Operator version. To recreate your broker deployment, create a new CR yaml file to match the purpose of your original CR and apply it. Section 3.4.1, "Deploying a basic broker instance" . describes how to apply the deploy/crs/broker_activemqartemis_cr.yaml file in the Operator installation archive, you can use that file as a basis for your new CR yaml file. 6.2.4. Upgrading version 0.17 of the Operator This procedure shows to how to use the OpenShift command-line interface (CLI) to upgrade version 0.17 of the Operator (that is, the latest version available for AMQ Broker 7.7) to the latest version for AMQ Broker 7.8. Procedure In your web browser, navigate to the Software Downloads page for AMQ Broker 7.8.5 patches . Ensure that the value of the Version drop-down list is set to 7.8.5 and the Patches tab is selected. to AMQ Broker 7.8.5 .3 Operator Installation and Example Files , click Download . Download of the amq-broker-operator-7.8.5-ocp-install-examples.zip compressed archive automatically begins. When the download has completed, move the archive to your chosen installation directory. The following example moves the archive to a directory called ~/broker/operator . mkdir ~/broker/operator mv amq-broker-operator-7.8.5-ocp-install-examples.zip ~/broker/operator In your chosen installation directory, extract the contents of the archive. For example: cd ~/broker/operator unzip amq-broker-operator-7.8.5-ocp-install-examples.zip Log in to OpenShift Container Platform as a cluster administrator. For example: USD oc login -u system:admin Switch to the OpenShift project in which you want to upgrade your Operator version. USD oc project <project-name> Delete the main broker Custom Resource (CR) instance in your project. This also deletes the broker deployment. For example: USD oc delete -f deploy/crs/broker_activemqartemis_cr.yaml Update the main broker Custom Resource Definition (CRD) in your OpenShift cluster to the latest version. USD oc apply -f deploy/crds/broker_activemqartemis_crd.yaml Note You do not need to update your cluster with the latest versions of the CRDs for addressing or the scaledown controller. These CRDs are fully compatible with the ones included with the Operator version. In the deploy directory of the latest Operator archive that you downloaded and extracted, open the operator.yaml file. Note In the operator.yaml file, the Operator uses an image that is represented by a Secure Hash Algorithm (SHA) value. The comment line, which begins with a number sign ( # ) symbol, denotes that the SHA value corresponds to a specific container image tag. Open the operator.yaml file for your Operator deployment. Check that any non-default values that you specified in your configuration are replicated in the new operator.yaml configuration file. Note The operator.yaml file for version 0.17 of the Operator includes environment variables whose names begin with BROKER_IMAGE . Do not replicate these environment variables in your new configuration. The latest version of the Operator for AMQ Broker 7.8 no longer uses these environment variables. If you have made any updates to the new operator.yaml file, save the file. Apply the updated Operator configuration. USD oc apply -f deploy/operator.yaml OpenShift updates your project to use the latest Operator version. 
To recreate your broker deployment, create a new CR yaml file to match the purpose of your original CR and apply it. Section 3.4.1, "Deploying a basic broker instance" . describes how to apply the deploy/crs/broker_activemqartemis_cr.yaml file in the Operator installation archive, you can use that file as a basis for your new CR yaml file. 6.2.5. Upgrading version 0.15 of the Operator This procedure shows to how to use the OpenShift command-line interface (CLI) to upgrade version 0.15 of the Operator (that is, the first version available for AMQ Broker 7.7) to the latest version for AMQ Broker 7.8. Procedure In your web browser, navigate to the Software Downloads page for AMQ Broker 7.8.5 patches . Ensure that the value of the Version drop-down list is set to 7.8.5 and the Patches tab is selected. to AMQ Broker 7.8.5 .3 Operator Installation and Example Files , click Download . Download of the amq-broker-operator-7.8.5-ocp-install-examples.zip compressed archive automatically begins. When the download has completed, move the archive to your chosen installation directory. The following example moves the archive to a directory called ~/broker/operator . mkdir ~/broker/operator mv amq-broker-operator-7.8.5-ocp-install-examples.zip ~/broker/operator In your chosen installation directory, extract the contents of the archive. For example: cd ~/broker/operator unzip amq-broker-operator-7.8.5-ocp-install-examples.zip Log in to OpenShift Container Platform as a cluster administrator. For example: USD oc login -u system:admin Switch to the OpenShift project in which you want to upgrade your Operator version. USD oc project <project-name> Delete the main broker Custom Resource (CR) instance in your project. This also deletes the broker deployment. For example: USD oc delete -f deploy/crs/broker_activemqartemis_cr.yaml Update the main broker Custom Resource Definition (CRD) in your OpenShift cluster to the latest version. USD oc apply -f deploy/crds/broker_activemqartemis_crd.yaml Note You do not need to update your cluster with the latest versions of the CRDs for addressing or the scaledown controller. These CRDs are fully compatible with the ones included with the Operator version. In the deploy directory of the latest Operator archive that you downloaded and extracted, open the operator.yaml file. Note In the operator.yaml file, the Operator uses an image that is represented by a Secure Hash Algorithm (SHA) value. The comment line, which begins with a number sign ( # ) symbol, denotes that the SHA value corresponds to a specific container image tag. Open the operator.yaml file for your Operator deployment. Check that any non-default values that you specified in your configuration are replicated in the new operator.yaml configuration file. Note The operator.yaml file for version 0.15 of the Operator includes environment variables whose names begin with BROKER_IMAGE . Do not replicate these environment variables in your new configuration. The latest version of the Operator for AMQ Broker 7.8 no longer uses these environment variables. If you have made any updates to the new operator.yaml file, save the file. Apply the updated Operator configuration. USD oc apply -f deploy/operator.yaml OpenShift updates your project to use the latest Operator version. To recreate your broker deployment, create a new CR yaml file to match the purpose of your original CR and apply it. Section 3.4.1, "Deploying a basic broker instance" . 
describes how to apply the deploy/crs/broker_activemqartemis_cr.yaml file in the Operator installation archive, you can use that file as a basis for your new CR yaml file. 6.2.6. Upgrading version 0.13 of the Operator This procedure shows to how to use the OpenShift command-line interface (CLI) to upgrade version 0.13 of the Operator (that is, the version available for AMQ Broker 7.6) to the latest version for AMQ Broker 7.8. Procedure In your web browser, navigate to the Software Downloads page for AMQ Broker 7.8.5 patches . Ensure that the value of the Version drop-down list is set to 7.8.5 and the Patches tab is selected. to AMQ Broker 7.8.5 .3 Operator Installation and Example Files , click Download . Download of the amq-broker-operator-7.8.5-ocp-install-examples.zip compressed archive automatically begins. When the download has completed, move the archive to your chosen installation directory. The following example moves the archive to a directory called ~/broker/operator . mkdir ~/broker/operator mv amq-broker-operator-7.8.5-ocp-install-examples.zip ~/broker/operator In your chosen installation directory, extract the contents of the archive. For example: cd ~/broker/operator unzip amq-broker-operator-7.8.5-ocp-install-examples.zip Log in to OpenShift Container Platform as a cluster administrator. For example: USD oc login -u system:admin Switch to the OpenShift project in which you want to upgrade your Operator version. USD oc project <project-name> Delete the main broker Custom Resource (CR) instance in your project. This also deletes the broker deployment. For example: USD oc delete -f deploy/crs/broker_activemqartemis_cr.yaml Update the main broker Custom Resource Definition (CRD) in your OpenShift cluster to the latest version. USD oc apply -f deploy/crds/broker_activemqartemis_crd.yaml Update the address CRD in your OpenShift cluster to the latest version included with AMQ Broker 7.8. USD oc apply -f deploy/crds/broker_activemqartemisaddress_crd.yaml Note You do not need to update your cluster with the latest version of the CRD for the scaledown controller. In AMQ Broker 7.8, this CRD is fully compatible with the one that was included with the Operator for AMQ Broker 7.6. In the deploy directory of the latest Operator archive that you downloaded and extracted, open the operator.yaml file. Note In the operator.yaml file, the Operator uses an image that is represented by a Secure Hash Algorithm (SHA) value. The comment line, which begins with a number sign ( # ) symbol, denotes that the SHA value corresponds to a specific container image tag. Open the operator.yaml file for your Operator deployment. Check that any non-default values that you specified in your configuration are replicated in the new operator.yaml configuration file. If you have made any updates to the new operator.yaml file, save the file. Apply the updated Operator configuration. USD oc apply -f deploy/operator.yaml OpenShift updates your project to use the latest Operator version. 6.2.7. Upgrading version 0.9 of the Operator The following procedure shows how to use the OpenShift command-line interface (CLI) to upgrade version 0.9 of the Operator (that is, the version available for AMQ Broker 7.5 or the Long Term Support version available for AMQ Broker 7.4) to the latest version for AMQ Broker 7.8. Procedure In your web browser, navigate to the Software Downloads page for AMQ Broker 7.8.5 patches . Ensure that the value of the Version drop-down list is set to 7.8.5 and the Patches tab is selected. 
to AMQ Broker 7.8.5 .3 Operator Installation and Example Files , click Download . Download of the amq-broker-operator-7.8.5-ocp-install-examples.zip compressed archive automatically begins. When the download has completed, move the archive to your chosen installation directory. The following example moves the archive to a directory called ~/broker/operator . mkdir ~/broker/operator mv amq-broker-operator-7.8.5-ocp-install-examples.zip ~/broker/operator In your chosen installation directory, extract the contents of the archive. For example: cd ~/broker/operator unzip amq-broker-operator-7.8.5-ocp-install-examples.zip Log in to OpenShift Container Platform as a cluster administrator. For example: USD oc login -u system:admin Switch to the OpenShift project in which you want to upgrade your Operator version. USD oc project <project-name> Delete the main broker Custom Resource (CR) instance in your project. This also deletes the broker deployment. For example: USD oc delete -f deploy/crs/broker_v2alpha1_activemqartemis_cr.yaml Update the main broker Custom Resource Definition (CRD) in your OpenShift cluster to the latest version included with AMQ Broker 7.8. USD oc apply -f deploy/crds/broker_activemqartemis_crd.yaml Update the address CRD in your OpenShift cluster to the latest version included with AMQ Broker 7.8. USD oc apply -f deploy/crds/broker_activemqartemisaddress_crd.yaml Note You do not need to update your cluster with the latest version of the CRD for the scaledown controller. In AMQ Broker 7.8, this CRD is fully compatible with the one included with the Operator version. In the deploy directory of the latest Operator archive that you downloaded and extracted, open the operator.yaml file. Note In the operator.yaml file, the Operator uses an image that is represented by a Secure Hash Algorithm (SHA) value. The comment line, which begins with a number sign ( # ) symbol, denotes that the SHA value corresponds to a specific container image tag. Open the operator.yaml file for your Operator deployment. Check that any non-default values that you specified in your configuration are replicated in the new operator.yaml configuration file. If you have made any updates to the new operator.yaml file, save the file. Apply the updated Operator configuration. USD oc apply -f deploy/operator.yaml OpenShift updates your project to use the latest Operator version. To recreate your broker deployment, create a new CR yaml file to match the purpose of your original CR and apply it. Section 3.4.1, "Deploying a basic broker instance" . describes how to apply the deploy/crs/broker_activemqartemis_cr.yaml file in the Operator installation archive, you can use that file as a basis for your new CR yaml file. 6.3. Upgrading the Operator using OperatorHub This section describes how to use OperatorHub to upgrade different versions of the Operator to the latest version available for AMQ Broker 7.8. 6.3.1. Prerequisites You should use OperatorHub to upgrade the Operator only if you originally used OperatorHub to install the Operator (that is, the Operator appears under Operators Installed Operators for your project in the OpenShift Container Platform web console). By contrast, if you originally used the OpenShift command-line interface (CLI) to install the Operator, you should also use the CLI to upgrade the Operator. To learn how to upgrade the Operator using the CLI, see Section 6.2, "Upgrading the Operator using the CLI" . 
Upgrading the AMQ Broker Operator using OperatorHub requires cluster administrator privileges for your OpenShift cluster. 6.3.2. Before you begin This section describes some important considerations before you use OperatorHub to upgrade an instance of the AMQ Broker Operator. The Operator Lifecycle Manager automatically updates the CRDs in your OpenShift cluster when you install the latest Operator version from OperatorHub. You do not need to remove existing CRDs. When you update your cluster with the CRDs for the latest Operator version, this update affects all projects in the cluster. Any broker Pods deployed from versions of the Operator might become unable to update their status in the OpenShift Container Platform web console. When you click the Logs tab of a running broker Pod, you see messages indicating that 'UpdatePodStatus' has failed. However, the broker Pods and Operator in that project continue to work as expected. To fix this issue for an affected project, you must also upgrade that project to use the latest version of the Operator. 6.3.3. Upgrading the Operator using OperatorHub This procedure shows how to use OperatorHub to upgrade an instance of the AMQ Broker Operator. Procedure Log in to the OpenShift Container Platform web console as a cluster administrator. Delete the main Custom Resource (CR) instance for the broker deployment in your project. This action deletes the broker deployment. In the left navigation menu, click Administration Custom Resource Definitions . On the Custom Resource Definitions page, click the ActiveMQArtemis CRD. Click the Instances tab. Locate the CR instance that corresponds to your project namespace. For your CR instance, click the More Options icon (three vertical dots) on the right-hand side. Select Delete ActiveMQArtemis . Uninstall the existing AMQ Broker Operator from your project. In the left navigation menu, click Operators Installed Operators . From the Project drop-down menu at the top of the page, select the project in which you want to uninstall the Operator. Locate the Red Hat Integration - AMQ Broker instance that you want to uninstall. For your Operator instance, click the More Options icon (three vertical dots) on the right-hand side. Select Uninstall Operator . On the confirmation dialog box, click Uninstall . Use OperatorHub to install the latest version of the Operator for AMQ Broker 7.8. For more information, see Section 3.3.3, "Deploying the Operator from OperatorHub" . To recreate your broker deployment, create a new CR yaml file to match the purpose of your original CR and apply it. Section 3.4.1, "Deploying a basic broker instance" . describes how to apply the deploy/crs/broker_activemqartemis_cr.yaml file in the Operator installation archive, you can use that file as a basis for your new CR yaml file. 6.4. Upgrading the broker container image by specifying an AMQ Broker version The following procedure shows how to upgrade the broker container image for an Operator-based broker deployment by specifying an AMQ Broker version. You might do this, for example, if you upgrade the Operator to the latest version for AMQ Broker 7.8.5 but the spec.upgrades.enabled property in your CR is already set to true and the spec.version property specifies 7.7.0 or 7.8.0 . To upgrade the broker container image, you need to manually specify a new AMQ Broker version (for example, 7.8.5 ). When you specify a new version of AMQ Broker, the Operator automatically chooses the broker container image that corresponds to this version. 
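Before editing the CR, it can be useful to record which broker container image your Pods currently run, so that you can confirm the change after the Operator restarts them. A minimal check, assuming you are switched to the project that contains the broker deployment:
# List each Pod in the project together with the image of its first container;
# broker Pods should report the new image once the upgrade completes
oc get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'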
Prerequisites You must be using the latest version of the Operator for 7.8.5. To learn how to upgrade the Operator to the latest version, see: Section 6.2, "Upgrading the Operator using the CLI" Section 6.3.3, "Upgrading the Operator using OperatorHub" . As described in Section 2.4, "How the Operator chooses container images" , if you deploy a CR and do not explicitly specify a broker container image, the Operator automatically chooses the appropriate container image to use. To use the upgrade process described in this section, you must use this default behavior. If you override the default behavior by directly specifying a broker container image in your CR, the Operator cannot automatically upgrade the broker container image to correspond to an AMQ Broker version as described below. Procedure Edit the main broker CR instance for the broker deployment. Using the OpenShift command-line interface: Log in to OpenShift as a user that has privileges to edit and deploy CRs in the project for the broker deployment. In a text editor, open the CR file that you used for your broker deployment. For example, this might be the broker_activemqartemis_cr.yaml file that was included in the deploy/crs directory of the Operator installation archive that you previously downloaded and extracted. Using the OpenShift Container Platform web console: Log in to the console as a user that has privileges to edit and deploy CRs in the project for the broker deployment. In the left pane, click Administration Custom Resource Definitions . Click the ActiveMQArtemis CRD. Click the Instances tab. Locate the CR instance that corresponds to your project namespace. For your CR instance, click the More Options icon (three vertical dots) on the right-hand side. Select Edit ActiveMQArtemis . Within the console, a YAML editor opens, enabling you to edit the CR instance. To specify a version of AMQ Broker to which to upgrade the broker container image, set a value for the spec.version property of the CR. For example: spec: version: 7.8.5 ... In the spec section of the CR, locate the upgrades section. If this section is not already included in the CR, add it. spec: version: 7.8.5 ... upgrades: Ensure that the upgrades section includes the enabled and minor properties. spec: version: 7.8.5 ... upgrades: enabled: minor: To enable an upgrade of the broker container image based on a specified version of AMQ Broker, set the value of the enabled property to true . spec: version: 7.8.5 ... upgrades: enabled: true minor: To define the upgrade behavior of the broker, set a value for the minor property. To allow upgrades between minor AMQ Broker versions, set the value of minor to true . spec: version: 7.8.5 ... upgrades: enabled: true minor: true For example, suppose that the current broker container image corresponds to 7.7.0 , and a new image, corresponding to the 7.8.5 version specified for spec.version , is available. In this case, the Operator determines that there is an available upgrade between the 7.7 and 7.8 minor versions. Based on the preceding settings, which allow upgrades between minor versions, the Operator upgrades the broker container image. By contrast, suppose that the current broker container image corresponds to 7.8.0 , and a new image, corresponding to the 7.8.5 version specified for spec.version , is available. In this case, the Operator determines that there is an available upgrade between 7.8.0 and 7.8.5 micro versions. 
Based on the preceding settings, which allow upgrades only between minor versions, the Operator does not upgrade the broker container image. To allow upgrades between micro AMQ Broker versions, set the value of minor to false . spec: version: 7.8.5 ... upgrades: enabled: true minor: false For example, suppose that the current broker container image corresponds to 7.7.0 , and a new image, corresponding to the 7.8.5 version specified for spec.version , is available. In this case, the Operator determines that there is an available upgrade between the 7.7 and 7.8 minor versions. Based on the preceding settings, which do not allow upgrades between minor versions (that is, only between micro versions), the Operator does not upgrade the broker container image. By contrast, suppose that the current broker container image corresponds to 7.8.0 , and a new image, corresponding to the 7.8.5 version specified for spec.version , is available. In this case, the Operator determines that there is an available upgrade between 7.8.0 and 7.8.5 micro versions. Based on the preceding settings, which allow upgrades between micro versions, the Operator upgrades the broker container image. Apply the changes to the CR. Using the OpenShift command-line interface: Save the CR file. Switch to the project for the broker deployment. Apply the CR. Using the OpenShift web console: When you have finished editing the CR, click Save . When you apply the CR change, the Operator first validates that an upgrade to the AMQ Broker version specified for spec.version is available for your existing deployment. If you have specified an invalid version of AMQ Broker to which to upgrade (for example, a version that is not yet available), the Operator logs a warning message, and takes no further action. However, if an upgrade to the specified version is available, and the values specified for upgrades.enabled and upgrades.minor allow the upgrade, then the Operator upgrades each broker in the deployment to use the broker container image that corresponds to the new AMQ Broker version. The broker container image that the Operator uses is defined in an environment variable in the operator.yaml configuration file of the Operator deployment. The environment variable name includes an identifier for the AMQ Broker version. For example, the environment variable RELATED_IMAGE_ActiveMQ_Artemis_Broker_Kubernetes_781 corresponds to AMQ Broker 7.8.1. When the Operator has applied the CR change, it restarts each broker Pod in your deployment so that each Pod uses the specified image version. If you have multiple brokers in your deployment, only one broker Pod shuts down and restarts at a time. Additional resources To learn how the Operator uses environment variables to choose a broker container image, see Section 2.4, "How the Operator chooses container images" . | [
"mkdir ~/broker/operator mv amq-broker-operator-7.8.5-ocp-install-examples.zip ~/broker/operator",
"cd ~/broker/operator unzip amq-broker-operator-7.8.5-ocp-install-examples.zip",
"oc login -u <user>",
"oc project <project-name>",
"oc apply -f deploy/operator.yaml",
"mkdir ~/broker/operator mv amq-broker-operator-7.8.5-ocp-install-examples.zip ~/broker/operator",
"cd ~/broker/operator unzip amq-broker-operator-7.8.5-ocp-install-examples.zip",
"oc login -u <user>",
"oc project <project-name>",
"oc apply -f deploy/operator.yaml",
"mkdir ~/broker/operator mv amq-broker-operator-7.8.5-ocp-install-examples.zip ~/broker/operator",
"cd ~/broker/operator unzip amq-broker-operator-7.8.5-ocp-install-examples.zip",
"oc login -u system:admin",
"oc project <project-name>",
"oc delete -f deploy/crs/broker_activemqartemis_cr.yaml",
"oc apply -f deploy/crds/broker_activemqartemis_crd.yaml",
"oc apply -f deploy/operator.yaml",
"mkdir ~/broker/operator mv amq-broker-operator-7.8.5-ocp-install-examples.zip ~/broker/operator",
"cd ~/broker/operator unzip amq-broker-operator-7.8.5-ocp-install-examples.zip",
"oc login -u system:admin",
"oc project <project-name>",
"oc delete -f deploy/crs/broker_activemqartemis_cr.yaml",
"oc apply -f deploy/crds/broker_activemqartemis_crd.yaml",
"oc apply -f deploy/operator.yaml",
"mkdir ~/broker/operator mv amq-broker-operator-7.8.5-ocp-install-examples.zip ~/broker/operator",
"cd ~/broker/operator unzip amq-broker-operator-7.8.5-ocp-install-examples.zip",
"oc login -u system:admin",
"oc project <project-name>",
"oc delete -f deploy/crs/broker_activemqartemis_cr.yaml",
"oc apply -f deploy/crds/broker_activemqartemis_crd.yaml",
"oc apply -f deploy/crds/broker_activemqartemisaddress_crd.yaml",
"oc apply -f deploy/operator.yaml",
"mkdir ~/broker/operator mv amq-broker-operator-7.8.5-ocp-install-examples.zip ~/broker/operator",
"cd ~/broker/operator unzip amq-broker-operator-7.8.5-ocp-install-examples.zip",
"oc login -u system:admin",
"oc project <project-name>",
"oc delete -f deploy/crs/broker_v2alpha1_activemqartemis_cr.yaml",
"oc apply -f deploy/crds/broker_activemqartemis_crd.yaml",
"oc apply -f deploy/crds/broker_activemqartemisaddress_crd.yaml",
"oc apply -f deploy/operator.yaml",
"oc login -u <user> -p <password> --server= <host:port>",
"spec: version: 7.8.5",
"spec: version: 7.8.5 upgrades:",
"spec: version: 7.8.5 upgrades: enabled: minor:",
"spec: version: 7.8.5 upgrades: enabled: true minor:",
"spec: version: 7.8.5 upgrades: enabled: true minor: true",
"spec: version: 7.8.5 upgrades: enabled: true minor: false",
"oc project <project_name>",
"oc apply -f <path/to/broker_custom_resource_instance> .yaml"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/deploying_amq_broker_on_openshift/assembly_br-upgrading-operator-based-broker-deployments_broker-ocp |
10.4. Satellite Host Provider Hosts | 10.4. Satellite Host Provider Hosts Hosts provided by a Satellite host provider can also be used as virtualization hosts by the Red Hat Virtualization Manager. After a Satellite host provider has been added to the Manager as an external provider, any hosts that it provides can be added to and used in Red Hat Virtualization in the same way as Red Hat Virtualization Hosts (RHVH) and Red Hat Enterprise Linux hosts. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/satellite_host_provider_hosts |
Chapter 2. Getting started | Chapter 2. Getting started 2.1. AMQ Streams distribution AMQ Streams is distributed as single ZIP file. This ZIP file contains the following AMQ Streams components: Apache ZooKeeper Apache Kafka Apache Kafka Connect Apache Kafka MirrorMaker Kafka Exporter The Kafka Bridge and Cruise Control components are provided as separate zipped archives. Kafka Bridge Cruise Control 2.2. Downloading an AMQ Streams Archive An archived distribution of AMQ Streams is available for download from the Red Hat website. You can download a copy of the distribution by following the steps below. Procedure Download the latest version of the Red Hat AMQ Streams archive from the Customer Portal . 2.3. Installing AMQ Streams Follow this procedure to install the latest version of AMQ Streams on Red Hat Enterprise Linux. For instructions on upgrading an existing cluster to AMQ Streams 1.7, see AMQ Streams and Kafka upgrades . Prerequisites Download the installation archive . Review the Section 1.3, "Supported Configurations" Procedure Add new kafka user and group. sudo groupadd kafka sudo useradd -g kafka kafka sudo passwd kafka Create directory /opt/kafka . sudo mkdir /opt/kafka Create a temporary directory and extract the contents of the AMQ Streams ZIP file. mkdir /tmp/kafka unzip amq-streams_y.y-x.x.x.zip -d /tmp/kafka Move the extracted contents into /opt/kafka directory and delete the temporary directory. sudo mv /tmp/kafka/ kafka_y.y-x.x.x /* /opt/kafka/ rm -r /tmp/kafka Change the ownership of the /opt/kafka directory to the kafka user. sudo chown -R kafka:kafka /opt/kafka Create directory /var/lib/zookeeper for storing ZooKeeper data and set its ownership to the kafka user. sudo mkdir /var/lib/zookeeper sudo chown -R kafka:kafka /var/lib/zookeeper Create directory /var/lib/kafka for storing Kafka data and set its ownership to the kafka user. sudo mkdir /var/lib/kafka sudo chown -R kafka:kafka /var/lib/kafka 2.4. Data storage considerations An efficient data storage infrastructure is essential to the optimal performance of AMQ Streams. AMQ Streams requires block storage and works well with cloud-based block storage solutions, such as Amazon Elastic Block Store (EBS). The use of file storage is not recommended. Choose local storage when possible. If local storage is not available, you can use a Storage Area Network (SAN) accessed by a protocol such as Fibre Channel or iSCSI. 2.4.1. Apache Kafka and ZooKeeper storage support Use separate disks for Apache Kafka and ZooKeeper. Kafka supports JBOD (Just a Bunch of Disks) storage, a data storage configuration of multiple disks or volumes. JBOD provides increased data storage for Kafka brokers. It can also improve performance. Solid-state drives (SSDs), though not essential, can improve the performance of Kafka in large clusters where data is sent to and received from multiple topics asynchronously. SSDs are particularly effective with ZooKeeper, which requires fast, low latency data access. Note You do not need to provision replicated storage because Kafka and ZooKeeper both have built-in data replication. 2.4.2. File systems It is recommended that you configure your storage system to use the XFS file system. AMQ Streams is also compatible with the ext4 file system, but this might require additional configuration for best results. Additional resources For more information about XFS, see The XFS File System . 2.5. 
Running a single node AMQ Streams cluster This procedure shows how to run a basic AMQ Streams cluster consisting of a single Apache ZooKeeper node and a single Apache Kafka node, both running on the same host. The default configuration files are used for ZooKeeper and Kafka. Warning A single node AMQ Streams cluster does not provide reliability and high availability and is suitable only for development purposes. Prerequisites AMQ Streams is installed on the host Running the cluster Edit the ZooKeeper configuration file /opt/kafka/config/zookeeper.properties . Set the dataDir option to /var/lib/zookeeper/ : dataDir=/var/lib/zookeeper/ Edit the Kafka configuration file /opt/kafka/config/server.properties . Set the log.dirs option to /var/lib/kafka/ : log.dirs=/var/lib/kafka/ Switch to the kafka user: su - kafka Start ZooKeeper: /opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties Check that ZooKeeper is running: jcmd | grep zookeeper Returns: number org.apache.zookeeper.server.quorum.QuorumPeerMain /opt/kafka/config/zookeeper.properties Start Kafka: /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties Check that Kafka is running: jcmd | grep kafka Returns: number kafka.Kafka /opt/kafka/config/server.properties Additional resources For more information about installing AMQ Streams, see Section 2.3, "Installing AMQ Streams" . For more information about configuring AMQ Streams, see Section 2.8, "Configuring AMQ Streams" . 2.6. Using the cluster This procedure describes how to start the Kafka console producer and consumer clients and use them to send and receive several messages. A new topic is automatically created in step one. Topic auto-creation is controlled using the auto.create.topics.enable configuration property (set to true by default). Alternatively, you can configure and create topics before using the cluster. For more information, see Topics . Prerequisites AMQ Streams is installed on the host ZooKeeper and Kafka are running Procedure Start the Kafka console producer and configure it to send messages to a new topic: /opt/kafka/bin/kafka-console-producer.sh --broker-list <bootstrap-address> --topic <topic-name> For example: /opt/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-topic Enter several messages into the console. Press Enter to send each individual message to your new topic: >message 1 >message 2 >message 3 >message 4 When Kafka creates a new topic automatically, you might receive a warning that the topic does not exist: WARN Error while fetching metadata with correlation id 39 : {4-3-16-topic1=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient) The warning should not reappear after you send further messages. In a new terminal window, start the Kafka console consumer and configure it to read messages from the beginning of your new topic. /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server <bootstrap-address> --topic <topic-name> --from-beginning For example: /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic --from-beginning The incoming messages display in the consumer console. Switch to the producer console and send additional messages. Check that they display in the consumer console. Stop the Kafka console producer and then the consumer by pressing Ctrl+C . 2.7. Stopping the AMQ Streams services You can stop the Kafka and ZooKeeper services by running a script. 
All connections to the Kafka and ZooKeeper services will be terminated. Prerequisites AMQ Streams is installed on the host ZooKeeper and Kafka are up and running Procedure Stop the Kafka broker. su - kafka /opt/kafka/bin/kafka-server-stop.sh Confirm that the Kafka broker is stopped. jcmd | grep kafka Stop ZooKeeper. su - kafka /opt/kafka/bin/zookeeper-server-stop.sh 2.8. Configuring AMQ Streams Prerequisites AMQ Streams is downloaded and installed on the host Procedure Open the ZooKeeper and Kafka broker configuration files in a text editor. The configuration files are located at: ZooKeeper /opt/kafka/config/zookeeper.properties Kafka /opt/kafka/config/server.properties Edit the configuration options. The configuration files are in the Java properties format. Every configuration option should be on a separate line in the following format: Lines starting with # or ! will be treated as comments and will be ignored by AMQ Streams components. Values can be split into multiple lines by using \ directly before the newline / carriage return. (An illustrative example of this format appears after this entry.) Save the changes. Restart the ZooKeeper or Kafka broker. Repeat this procedure on all the nodes of the cluster. | [
"sudo groupadd kafka sudo useradd -g kafka kafka sudo passwd kafka",
"sudo mkdir /opt/kafka",
"mkdir /tmp/kafka unzip amq-streams_y.y-x.x.x.zip -d /tmp/kafka",
"sudo mv /tmp/kafka/ kafka_y.y-x.x.x /* /opt/kafka/ rm -r /tmp/kafka",
"sudo chown -R kafka:kafka /opt/kafka",
"sudo mkdir /var/lib/zookeeper sudo chown -R kafka:kafka /var/lib/zookeeper",
"sudo mkdir /var/lib/kafka sudo chown -R kafka:kafka /var/lib/kafka",
"dataDir=/var/lib/zookeeper/",
"log.dirs=/var/lib/kafka/",
"su - kafka",
"/opt/kafka/bin/zookeeper-server-start.sh -daemon /opt/kafka/config/zookeeper.properties",
"jcmd | grep zookeeper",
"number org.apache.zookeeper.server.quorum.QuorumPeerMain /opt/kafka/config/zookeeper.properties",
"/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties",
"jcmd | grep kafka",
"number kafka.Kafka /opt/kafka/config/server.properties",
"/opt/kafka/bin/kafka-console-producer.sh --broker-list <bootstrap-address> --topic <topic-name>",
"/opt/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-topic",
">message 1 >message 2 >message 3 >message 4",
"WARN Error while fetching metadata with correlation id 39 : {4-3-16-topic1=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)",
"/opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server <bootstrap-address> --topic <topic-name> --from-beginning",
"/opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic --from-beginning",
"su - kafka /opt/kafka/bin/kafka-server-stop.sh",
"jcmd | grep kafka",
"su - kafka /opt/kafka/bin/zookeeper-server-stop.sh",
"<option> = <value>",
"This is a comment",
"sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username=\"bob\" password=\"bobs-password\";"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/using_amq_streams_on_rhel/assembly-getting-started-str |
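The preceding entry states that the ZooKeeper and Kafka configuration files use the Java properties format, with #- or !-prefixed comments and backslash line continuation. The following hedged sketch writes an illustrative fragment to a scratch file so the format is visible in one place; the option values are placeholders only (for example, enabling an SSL listener requires additional keystore configuration that is not shown), so do not copy them into a production broker configuration as-is.

# Write an example fragment to a scratch file (illustrative values only)
cat > /tmp/amq-streams-format-example.properties <<'EOF'
# This is a comment and is ignored by AMQ Streams components
log.dirs=/var/lib/kafka/
# A value can be split across lines with a trailing backslash
listeners=PLAINTEXT://0.0.0.0:9092,\
SSL://0.0.0.0:9093
EOF
cat /tmp/amq-streams-format-example.properties

After editing the real configuration files, restart the ZooKeeper or Kafka broker so that the changes take effect.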
Chapter 3. Registering Hosts and Setting Up Host Integration | Chapter 3. Registering Hosts and Setting Up Host Integration You must register hosts that have not been provisioned through Satellite to be able to manage them with Satellite. You can register hosts through Satellite Server or Capsule Server. Note that the entitlement-based subscription model is deprecated and will be removed in a future release. Red Hat recommends that you use the access-based subscription model of Simple Content Access instead. You must also install and configure tools on your hosts, depending on which integration features you want to use. Use the following procedures to install and configure host tools: Section 3.5, "Installing the Katello Agent" Section 3.6, "Installing Tracer" Section 3.7, "Installing and Configuring Puppet Agent on a Host Manually" 3.1. Supported Clients in Registration Satellite supports the following operating systems and architectures for registration. Supported Host Operating Systems The hosts can use the following operating systems: Red Hat Enterprise Linux 9, 8, 7 Red Hat Enterprise Linux 6 with the ELS Add-On Supported Host Architectures The hosts can use the following architectures: i386 x86_64 s390x ppc_64 3.2. Registration Methods You can use the following methods to register hosts to Satellite: Global registration You generate a curl command from Satellite and run this command from an unlimited number of hosts to register them using provisioning templates over the Satellite API. For more information, see Section 3.3, "Registering Hosts by Using Global Registration" . By using this method, you can also deploy Satellite SSH keys to hosts during registration to Satellite to enable hosts for remote execution jobs. For more information, see Chapter 12, Configuring and Setting Up Remote Jobs . By using this method, you can also configure hosts with Red Hat Insights during registration to Satellite. For more information, see Section 9.1, "Using Red Hat Insights with Hosts in Satellite" . (Deprecated) Katello CA Consumer You download and install the consumer RPM from satellite.example.com /pub/katello-ca-consumer-latest.noarch.rpm on the host and then run subscription-manager . (Deprecated) Bootstrap script You download the bootstrap script from satellite.example.com /pub/bootstrap.py on the host and then run the script. For more information, see Section 3.4, "Registering Hosts by Using The Bootstrap Script" . 3.3. Registering Hosts by Using Global Registration You can register a host to Satellite by generating a curl command on Satellite and running this command on hosts. This method uses two provisioning templates: Global Registration template and Linux host_init_config default template. That gives you complete control over the host registration process. You can also customize the default templates if you need greater flexibility. For more information, see Section 3.3.3, "Customizing the Registration Templates" . 3.3.1. Global Parameters for Registration You can configure the following global parameters by navigating to Configure > Global Parameters : The host_registration_insights parameter is used in the insights snippet. If the parameter is set to true , the registration installs and enables the Red Hat Insights client on the host. If the parameter is set to false , it prevents Satellite and the Red Hat Insights client from uploading Inventory reports to your Red Hat Hybrid Cloud Console. The default value is true . 
When overriding the parameter value, set the parameter type to boolean . The host_packages parameter is for installing packages on the host. The host_registration_remote_execution parameter is used in the remote_execution_ssh_keys snippet. If it is set to true , the registration enables remote execution on the host. The default value is true . The remote_execution_ssh_keys , remote_execution_ssh_user , remote_execution_create_user , and remote_execution_effective_user_method parameters are used in the remote_execution_ssh_keys snippet. For more details, see the snippet. You can navigate to snippets in the Satellite web UI through Hosts > Templates > Provisioning Templates . 3.3.2. Registering a Host You can register a host by using registration templates and set up various integration features and host tools during the registration process. Prerequisites Your user account has a role assigned that has the create_hosts permission. You must have root privileges on the host that you want to register. Satellite Server, any Capsule Servers, and all hosts must be synchronized with the same NTP server, and have a time synchronization tool enabled and running. An activation key must be available for the host. For more information, see Managing Activation Keys in Managing Content . If you want to use Capsule Servers instead of your Satellite Server, ensure that you have configured your Capsule Servers accordingly. For more information, see Configuring Capsule for Host Registration and Provisioning in Installing Capsule Server . If your Satellite Server or Capsule Server is behind an HTTP proxy, configure the Subscription Manager on your host to use the HTTP proxy for connection. For more information, see How to access Red Hat Subscription Manager (RHSM) through a firewall or proxy in the Red Hat Knowledgebase . Procedure In the Satellite web UI, navigate to Hosts > Register Host . Optional: Select a different Organization . Optional: Select a different Location . Optional: From the Host Group list, select the host group to associate the hosts with. Fields that inherit value from Host group : Operating system , Activation Keys and Lifecycle environment . Optional: From the Operating system list, select the operating system of hosts that you want to register. Optional: From the Capsule list, select the Capsule to register hosts through. Optional: Select the Insecure option, if you want to make the first call insecure. During this first call, hosts download the CA file from Satellite. Hosts will use this CA file to connect to Satellite with all future calls making them secure. Red Hat recommends that you avoid insecure calls. If an attacker, located in the network between Satellite and a host, fetches the CA file from the first insecure call, the attacker will be able to access the content of the API calls to and from the registered host and the JSON Web Tokens (JWT). Therefore, if you have chosen to deploy SSH keys during registration, the attacker will be able to access the host using the SSH key. Instead, you can manually copy and install the CA file on each host before registering the host. To do this, find where Satellite stores the CA file by navigating to Administer > Settings > Authentication and locating the value of the SSL CA file setting. 
Copy the CA file to the /etc/pki/ca-trust/source/anchors/ directory on hosts and enter the following commands: Then register the hosts with a secure curl command, such as: The following is an example of the curl command with the --insecure option: Select the Advanced tab. From the Setup REX list, select whether you want to deploy Satellite SSH keys to hosts or not. If set to Yes , public SSH keys will be installed on the registered host. The inherited value is based on the host_registration_remote_execution parameter. It can be inherited, for example from a host group, an operating system, or an organization. When overridden, the selected value will be stored on host parameter level. From the Setup Insights list, select whether you want to install insights-client and register the hosts to Insights. The Insights tool is available for Red Hat Enterprise Linux only. It has no effect on other operating systems. You must enable the following repositories on a registered machine: RHEL 6: rhel-6-server-rpms RHEL 7: rhel-7-server-rpms RHEL 8: rhel-8-for-x86_64-appstream-rpms The insights-client package is installed by default on RHEL 8 except in environments whereby RHEL 8 was deployed with "Minimal Install" option. Optional: In the Install packages field, list the packages (separated with spaces) that you want to install on the host upon registration. This can be set by the host_packages parameter. Optional: Select the Update packages option to update all packages on the host upon registration. This can be set by the host_update_packages parameter. Optional: In the Repository field, enter a repository to be added before the registration is performed. For example, it can be useful to make the subscription-manager package available for the purpose of the registration. For Red Hat family distributions, enter the URL of the repository, for example http://rpm.example.com/ . Optional: In the Repository GPG key URL field, specify the public key to verify the signatures of GPG-signed packages. It needs to be specified in the ASCII form with the GPG public key header. Optional: In the Token lifetime (hours) field, change the validity duration of the JSON Web Token (JWT) that Satellite uses for authentication. The duration of this token defines how long the generated curl command works. You can set the duration to 0 - 999 999 hours or unlimited. Note that Satellite applies the permissions of the user who generates the curl command to authorization of hosts. If the user loses or gains additional permissions, the permissions of the JWT change too. Therefore, do not delete, block, or change permissions of the user during the token duration. The scope of the JWTs is limited to the registration endpoints only and cannot be used anywhere else. Optional: In the Remote Execution Interface field, enter the identifier of a network interface that hosts must use for the SSH connection. If you keep this field blank, Satellite uses the default network interface. In the Activation Keys field, enter one or more activation keys to assign to hosts. Optional: Select the Lifecycle environment . Optional: Select the Ignore errors option if you want to ignore subscription manager errors. Optional: Select the Force option if you want to remove any katello-ca-consumer rpms before registration and run subscription-manager with the --force argument. Click the Generate button. Copy the generated curl command. On the host that you want to register, run the curl command as root . 3.3.3. 
Customizing the Registration Templates You can customize the registration process by editing the provisioning templates. Note that all default templates in Satellite are locked. If you want to customize the registration templates, you must clone the default templates and edit the clones. Note Red Hat only provides support for the original unedited templates. Customized templates do not receive updates released by Red Hat. The registration process uses the following provisioning templates: The Global Registration template contains steps for registering hosts to Satellite. This template renders when hosts access the /register Satellite API endpoint. The Linux host_init_config default template contains steps for initial configuration of hosts after they are registered. Procedure Navigate to Hosts > Templates > Provisioning Templates . Search for the template you want to edit. In the row of the required template, click Clone . Edit the template as needed. For more information, see Appendix A, Template Writing Reference . Click Submit . Navigate to Administer > Settings > Provisioning . Change the following settings as needed: Point the Default Global registration template setting to your custom global registration template, Point the Default 'Host initial configuration' template setting to your custom initial configuration template. 3.4. Registering Hosts by Using The Bootstrap Script Deprecated Use Section 3.3, "Registering Hosts by Using Global Registration" instead. Use the bootstrap script to automate content registration and Puppet configuration. You can use the bootstrap script to register new hosts, or to migrate existing hosts from RHN, SAM, RHSM, or another Red Hat Satellite instance. The katello-client-bootstrap package is installed by default on Satellite Server's base operating system. The bootstrap.py script is installed in the /var/www/html/pub/ directory to make it available to hosts at satellite.example.com /pub/bootstrap.py . The script includes documentation in the /usr/share/doc/katello-client-bootstrap- version /README.md file. To use the bootstrap script, you must install it on the host. As the script is only required once, and only for the root user, you can place it in /root or /usr/local/sbin and remove it after use. This procedure uses /root . Prerequisites You have a Satellite user with the permissions required to run the bootstrap script. The examples in this procedure specify the admin user. If this is not acceptable to your security policy, create a new role with the minimum permissions required and add it to the user that will run the script. For more information, see Section 3.4.1, "Setting Permissions for the Bootstrap Script" . You have an activation key for your hosts with the Satellite Client 6 repository enabled. For information on configuring activation keys, see Managing Activation Keys in the Content Management Guide . You have created a host group. For more information about creating host groups, see Section 2.7, "Creating a Host Group" . Puppet Considerations If a host group is associated with a Puppet environment created inside a Production environment, Puppet fails to retrieve the Puppet CA certificate while registering a host from that host group. To create a suitable Puppet environment to be associated with a host group, follow these steps: Manually create a directory: In the Satellite web UI, navigate to Configure > Environments and click Import environment from . The button name includes the FQDN of the internal or external Capsule. 
Choose the created directory and click Update . Procedure Log in to the host as the root user. Download the script: Make the script executable: Confirm that the script is executable by viewing the help text: On Red Hat Enterprise Linux 8: On other Red Hat Enterprise Linux versions: Enter the bootstrap command with values suitable for your environment. For the --server option, specify the FQDN of Satellite Server or a Capsule Server. For the --location , --organization , and --hostgroup options, use quoted names, not labels, as arguments to the options. For advanced use cases, see Section 3.4.2, "Advanced Bootstrap Script Configuration" . On Red Hat Enterprise Linux 8, enter the following command: On Red Hat Enterprise Linux 6 or 7, enter the following command: Enter the password of the Satellite user you specified with the --login option. The script sends notices of progress to stdout . When prompted by the script, approve the host's Puppet certificate. In the Satellite web UI, navigate to Infrastructure > Capsules and find the Satellite or Capsule Server you specified with the --server option. From the list in the Actions column, select Certificates . In the Actions column, click Sign to approve the host's Puppet certificate. Return to the host to see the remainder of the bootstrap process completing. In the Satellite web UI, navigate to Hosts > All hosts and ensure that the host is connected to the correct host group. Optional: After the host registration is complete, remove the script: 3.4.1. Setting Permissions for the Bootstrap Script Use this procedure to configure a Satellite user with the permissions required to run the bootstrap script. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Administer > Users . Select an existing user by clicking the required Username . A new pane opens with tabs to modify information about the selected user. Alternatively, create a new user specifically for the purpose of running this script. Click the Roles tab. Select Edit hosts and Viewer from the Roles list. Important The Edit hosts role allows the user to edit and delete hosts as well as being able to add hosts. If this is not acceptable to your security policy, create a new role with the following permissions and assign it to the user: view_organizations view_locations view_domains view_hostgroups view_hosts view_architectures view_ptables view_operatingsystems create_hosts Click Submit . CLI procedure Create a role with the minimum permissions required by the bootstrap script. This example creates a role with the name Bootstrap : Assign the new role to an existing user: Alternatively, you can create a new user and assign this new role to them. For more information on creating users with Hammer, see Managing Users and Roles in the Administering Red Hat Satellite guide. 3.4.2. Advanced Bootstrap Script Configuration This section has more examples for using the bootstrap script to register or migrate a host. Warning These examples specify the admin Satellite user. If this is not acceptable to your security policy, create a new role with the minimum permissions required by the bootstrap script. For more information, see Section 3.4.1, "Setting Permissions for the Bootstrap Script" . 3.4.2.1. Migrating a Host From One Satellite to Another Satellite Use the script with --force to remove the katello-ca-consumer-* packages from the old Satellite and install the katello-ca-consumer-* packages on the new Satellite. 
Procedure On Red Hat Enterprise Linux 8, enter the following command: On Red Hat Enterprise Linux 6 or 7, enter the following command: 3.4.2.2. Migrating a Host from Red Hat Network (RHN) or Satellite 5 to Satellite The bootstrap script detects the presence of /etc/syconfig/rhn/systemid and a valid connection to RHN as an indicator that the system is registered to a legacy platform. The script then calls rhn-classic-migrate-to-rhsm to migrate the system from RHN. By default, the script does not delete the system's legacy profile due to auditing reasons. To remove the legacy profile, use --legacy-purge , and use --legacy-login to supply a user account that has appropriate permissions to remove a profile. Enter the user account password when prompted. Procedure On Red Hat Enterprise Linux 8, enter the following command: On Red Hat Enterprise Linux 6 or 7, enter the following command: 3.4.2.3. Registering a Host to Satellite without Puppet By default, the bootstrap script configures the host for content management and configuration management. If you have an existing configuration management system and do not want to install Puppet on the host, use --skip-puppet . Procedure On Red Hat Enterprise Linux 8, enter the following command: On Red Hat Enterprise Linux 6 or 7, enter the following command: 3.4.2.4. Registering a Host to Satellite for Content Management Only To register a system as a content host, and omit the provisioning and configuration management functions, use --skip-foreman . Procedure On Red Hat Enterprise Linux 8, enter the following command: On Red Hat Enterprise Linux 6 or 7, enter the following command: 3.4.2.5. Changing the Method the Bootstrap Script Uses to Download the Consumer RPM By default, the bootstrap script uses HTTP to download the consumer RPM from http://satellite.example.com/pub/katello-ca-consumer-latest.noarch.rpm . In some environments, you might want to allow HTTPS only between the host and Satellite. Use --download-method to change the download method from HTTP to HTTPS. Procedure On Red Hat Enterprise Linux 8, enter the following command: On Red Hat Enterprise Linux 6 or 7, enter the following command: 3.4.2.6. Providing the host's IP address to Satellite On hosts with multiple interfaces or multiple IP addresses on one interface, you might need to override the auto-detection of the IP address and provide a specific IP address to Satellite. Use --ip . Procedure On Red Hat Enterprise Linux 8, enter the following command: On Red Hat Enterprise Linux 6 or 7, enter the following command: 3.4.2.7. Enabling Remote Execution on the Host Use --rex and --rex-user to enable remote execution and add the required SSH keys for the specified user. Procedure On Red Hat Enterprise Linux 8, enter the following command: On Red Hat Enterprise Linux 6 or 7, enter the following command: 3.4.2.8. Creating a Domain for a Host During Registration To create a host record, the DNS domain of a host needs to exist in Satellite prior to running the script. If the domain does not exist, add it using --add-domain . Procedure On Red Hat Enterprise Linux 8, enter the following command: On Red Hat Enterprise Linux 6 or 7, enter the following command: 3.4.2.9. Providing an Alternative FQDN for the Host If the host's host name is not an FQDN, or is not RFC-compliant (containing a character such as an underscore), the script will fail at the host name validation stage. 
If you cannot update the host to use an FQDN that is accepted by Satellite, you can use the bootstrap script to specify an alternative FQDN. Procedure Set create_new_host_when_facts_are_uploaded and create_new_host_when_report_is_uploaded to false using Hammer: Use --fqdn to specify the FQDN that will be reported to Satellite: On Red Hat Enterprise Linux 8, enter the following command: On Red Hat Enterprise Linux 6 or 7, enter the following command: 3.5. Installing the Katello Agent You can install the Katello agent to remotely update Satellite clients. Note The Katello agent is deprecated and will be removed in a future Satellite version. Migrate your processes to use the remote execution feature to update clients remotely. For more information, see Migrating Hosts from Katello Agent to Remote Execution in Managing Hosts . The katello-agent package depends on the gofer package that provides the goferd service. Prerequisites You have enabled the Satellite Client 6 repository on Satellite Server. For more information, see Enabling the Satellite Client 6 Repository in Installing Satellite Server in a Connected Network Environment . You have synchronized the Satellite Client 6 repository on Satellite Server. For more information, see Synchronizing the Satellite Client 6 Repository in Installing Satellite Server in a Connected Network Environment . You have enabled the Satellite Client 6 repository on the client. Procedure Install the katello-agent package: Start the goferd service: 3.6. Installing Tracer Use this procedure to install Tracer on Red Hat Satellite and access Traces. Tracer displays a list of services and applications that are outdated and need to be restarted. Traces is the output generated by Tracer in the Satellite web UI. Prerequisites The host must be registered to Red Hat Satellite. The Red Hat Satellite Client 6 repository must be enabled and synchronized on Satellite Server, and enabled on the host. Procedure On the content host, install the katello-host-tools-tracer RPM package: Enter the following command: In the Satellite web UI, navigate to Hosts > All hosts , then click the required host name. Click the Traces tab to view Traces. If it is not installed, an Enable Traces button initiates a remote execution job that installs the package. 3.7. Installing and Configuring Puppet Agent on a Host Manually Install and configure the Puppet agent on a host manually. Prerequisites The host must have a Puppet environment assigned to it. The Satellite Client 6 repository must be enabled and synchronized to Satellite Server, and enabled on the host. For more information, see Importing Content in Managing Content . Procedure Log in to the host as the root user. Install the Puppet agent package. On hosts running Red Hat Enterprise Linux 8 and above: On hosts running Red Hat Enterprise Linux 7 and below: Add the Puppet agent to PATH in your current shell using the following script: Configure the Puppet agent. Set the environment parameter to the name of the Puppet environment to which the host belongs: Start the Puppet agent service: Create a certificate for the host: In the Satellite web UI, navigate to Infrastructure > Capsules . From the list in the Actions column for the required Capsule Server, select Certificates . Click Sign to the right of the required host to sign the SSL certificate for the Puppet agent. On the host, run the Puppet agent again: Additional Resources For more information about Puppet, see Managing Configurations Using Puppet Integration in Red Hat Satellite . 
| [
"update-ca-trust enable update-ca-trust",
"curl -sS https://satellite.example.com/register",
"curl -sS --insecure https://satellite.example.com/register",
"mkdir /etc/puppetlabs/code/environments/ example_environment",
"curl -O http:// satellite.example.com /pub/bootstrap.py",
"chmod +x bootstrap.py",
"/usr/libexec/platform-python bootstrap.py -h",
"./bootstrap.py -h",
"/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location= \"Example Location\" --organization= \"Example Organization\" --hostgroup= \"Example Host Group\" --activationkey= activation_key",
"./bootstrap.py --login= admin --server satellite.example.com --location= \"Example Location\" --organization= \"Example Organization\" --hostgroup= \"Example Host Group\" --activationkey= activation_key",
"rm bootstrap.py",
"ROLE='Bootstrap' hammer role create --name \"USDROLE\" hammer filter create --role \"USDROLE\" --permissions view_organizations hammer filter create --role \"USDROLE\" --permissions view_locations hammer filter create --role \"USDROLE\" --permissions view_domains hammer filter create --role \"USDROLE\" --permissions view_hostgroups hammer filter create --role \"USDROLE\" --permissions view_hosts hammer filter create --role \"USDROLE\" --permissions view_architectures hammer filter create --role \"USDROLE\" --permissions view_ptables hammer filter create --role \"USDROLE\" --permissions view_operatingsystems hammer filter create --role \"USDROLE\" --permissions create_hosts",
"hammer user add-role --id user_id --role Bootstrap",
"/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location= \"Example Location\" --organization= \"Example Organization\" --hostgroup= \"Example Host Group\" --activationkey= activation_key --force",
"bootstrap.py --login= admin --server satellite.example.com --location= \"Example Location\" --organization= \"Example Organization\" --hostgroup= \"Example Host Group\" --activationkey= activation_key --force",
"/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location= \"Example Location\" --organization= \"Example Organization\" --hostgroup= \"Example Host Group\" --activationkey= activation_key --legacy-purge --legacy-login rhn-user",
"bootstrap.py --login= admin --server satellite.example.com --location= \"Example Location\" --organization= \"Example Organization\" --hostgroup= \"Example Host Group\" --activationkey= activation_key --legacy-purge --legacy-login rhn-user",
"/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location= \"Example Location\" --organization= \"Example Organization\" --hostgroup= \"Example Host Group\" --activationkey= activation_key --skip-puppet",
"bootstrap.py --login= admin --server satellite.example.com --location= \"Example Location\" --organization= \"Example Organization\" --hostgroup= \"Example Host Group\" --activationkey= activation_key --skip-puppet",
"/usr/libexec/platform-python bootstrap.py --server satellite.example.com --organization= \"Example Organization\" --activationkey= activation_key --skip-foreman",
"bootstrap.py --server satellite.example.com --organization= \"Example Organization\" --activationkey= activation_key --skip-foreman",
"/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location= \"Example Location\" --organization= \"Example Organization\" --hostgroup= \"Example Host Group\" --activationkey= activation_key --download-method https",
"bootstrap.py --login= admin --server satellite.example.com --location= \"Example Location\" --organization= \"Example Organization\" --hostgroup= \"Example Host Group\" --activationkey= activation_key --download-method https",
"/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location= \"Example Location\" --organization= \"Example Organization\" --hostgroup= \"Example Host Group\" --activationkey= activation_key --ip 192.x.x.x",
"bootstrap.py --login= admin --server satellite.example.com --location= \"Example Location\" --organization= \"Example Organization\" --hostgroup= \"Example Host Group\" --activationkey= activation_key --ip 192.x.x.x",
"/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location= \"Example Location\" --organization= \"Example Organization\" --hostgroup= \"Example Host Group\" --activationkey= activation_key --rex --rex-user root",
"bootstrap.py --login= admin --server satellite.example.com --location= \"Example Location\" --organization= \"Example Organization\" --hostgroup= \"Example Host Group\" --activationkey= activation_key --rex --rex-user root",
"/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location= \"Example Location\" --organization= \"Example Organization\" --hostgroup= \"Example Host Group\" --activationkey= activation_key --add-domain",
"bootstrap.py --login= admin --server satellite.example.com --location= \"Example Location\" --organization= \"Example Organization\" --hostgroup= \"Example Host Group\" --activationkey= activation_key --add-domain",
"hammer settings set --name create_new_host_when_facts_are_uploaded --value false hammer settings set --name create_new_host_when_report_is_uploaded --value false",
"/usr/libexec/platform-python bootstrap.py --login= admin --server satellite.example.com --location= \"Example Location\" --organization= \"Example Organization\" --hostgroup= \"Example Host Group\" --activationkey= activation_key --fqdn node100.example.com",
"bootstrap.py --login= admin --server satellite.example.com --location= \"Example Location\" --organization= \"Example Organization\" --hostgroup= \"Example Host Group\" --activationkey= activation_key --fqdn node100.example.com",
"yum install katello-agent",
"systemctl start goferd",
"yum install katello-host-tools-tracer",
"katello-tracer-upload",
"dnf install puppet-agent",
"yum install puppet-agent",
". /etc/profile.d/puppet-agent.sh",
"puppet config set server satellite.example.com --section agent puppet config set environment My_Puppet_Environment --section agent",
"puppet resource service puppet ensure=running enable=true",
"puppet ssl bootstrap",
"puppet ssl bootstrap"
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/managing_hosts/registering_hosts_to_server_managing-hosts |
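The preceding entry covers several registration paths; the following hedged sketch shows only the global registration flow on a single host, using the secure variant in which the Satellite CA certificate is installed first. The CA file name and the registration URL shown here are placeholders — the real curl command, including its authorization token and your chosen options, must be generated in the Satellite web UI under Hosts > Register Host and pasted in place of the example line.

# On the host to be registered, as root:

# 1. Install the Satellite CA certificate copied from the location shown in
#    Administer > Settings > Authentication (the file name is an assumption)
cp katello-server-ca.crt /etc/pki/ca-trust/source/anchors/
update-ca-trust enable
update-ca-trust

# 2. Run the curl command generated by the web UI (placeholder only)
curl -sS 'https://satellite.example.com/register?activation_keys=my_key' | bash

# 3. Confirm the registration
subscription-manager identity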
14.2. Differences Between the CarMart and Transactional Quickstarts | 14.2. Differences Between the CarMart and Transactional Quickstarts Despite the similarity in steps to build, deploy and remove the transactional and non-transactional CarMart quickstarts, some differences must be noted. The following is a list of such differences: CarMart is available for both Remote Client-Server Mode and Library Mode. Transactional CarMart is only available in Library Mode because transactions are not available in Remote Client-Server Mode. The Transactional Quickstart also displays how a transaction rollback occurs. Use the Add car with rollback button to view the rollback. The CarMart example has a simple Add car button instead. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/differences_between_the_carmart_and_transactional_quickstarts
Chapter 8. Checking for Local Storage Operator deployments | Chapter 8. Checking for Local Storage Operator deployments Red Hat OpenShift Data Foundation clusters with Local Storage Operator are deployed using local storage devices. To find out if your existing cluster with OpenShift Data Foundation was deployed using local storage devices, use the following procedure: Prerequisites OpenShift Data Foundation is installed and running in the openshift-storage namespace. Procedure By checking the storage class associated with your OpenShift Data Foundation cluster's persistent volume claims (PVCs), you can tell if your cluster was deployed using local storage devices. Check the storage class associated with OpenShift Data Foundation cluster's PVCs with the following command: Check the output. For clusters with Local Storage Operators, the PVCs associated with ocs-deviceset use the storage class localblock . The output looks similar to the following: Additional Resources Deploying OpenShift Data Foundation using local storage devices on VMware Deploying OpenShift Data Foundation using local storage devices on Red Hat Virtualization Deploying OpenShift Data Foundation using local storage devices on bare metal Deploying OpenShift Data Foundation using local storage devices on IBM Power | [
"oc get pvc -n openshift-storage",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE db-noobaa-db-0 Bound pvc-d96c747b-2ab5-47e2-b07e-1079623748d8 50Gi RWO ocs-storagecluster-ceph-rbd 114s ocs-deviceset-0-0-lzfrd Bound local-pv-7e70c77c 1769Gi RWO localblock 2m10s ocs-deviceset-1-0-7rggl Bound local-pv-b19b3d48 1769Gi RWO localblock 2m10s ocs-deviceset-2-0-znhk8 Bound local-pv-e9f22cdc 1769Gi RWO localblock 2m10s"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/troubleshooting_openshift_data_foundation/checking-for-local-storage-operator-deployments_rhodf |
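As a convenience, the check described above can be narrowed so that only the relevant PVCs and their storage classes are printed. The following one-liner is a sketch rather than part of the documented procedure; it assumes the default ocs-deviceset naming shown in the example output.

# Print the name and storage class of each ocs-deviceset PVC;
# a storage class of "localblock" indicates a Local Storage Operator deployment
oc get pvc -n openshift-storage -o custom-columns=NAME:.metadata.name,STORAGECLASS:.spec.storageClassName | grep ocs-deviceset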
Chapter 11. Searching IdM entries using the ldapsearch command | Chapter 11. Searching IdM entries using the ldapsearch command You can use the ipa find command to search through the Identity Management entries. For more information about ipa command see Structure of IPA commands section. This section introduces the basics of an alternative search option using ldapsearch command line command through the Identity Management entries. 11.1. Using the ldapsearch command The ldapsearch command has the following format: To configure the authentication method, specify the -x option to use simple binds or the -Y option to set the Simple Authentication and Security Layer (SASL) mechanism. Note that you need to obtain a Kerberos ticket if you are using the -Y GSSAPI option. The options are the ldapsearch command options described in a table below. The search_filter is an LDAP search filter. The list_of_attributes is a list of the attributes that the search results return. For example, you want to search all the entries of a base LDAP tree for the user name user01 : The -x option tells the ldapsearch command to authenticate with the simple bind. Note that if you do not provide the Distinguish Name (DN) with the -D option, the authentication is anonymous. The -H option connects you to the ldap://ldap.example.com . The -s sub option tells the ldapsearch command to search all the entries, starting from the base DN, for the user with the name user01 . The "(uid=user01)" is a filter. Note that if you do not provide the starting point for the search with the -b option, the command searches in the default tree. It is specified in the BASE parameter of the etc/openldap/ldap.conf file. Table 11.1. The ldapsearch command options Option Description -b The starting point for the search. If your search parameters contain an asterisk (*) or other character, that the command line can interpret into a code, you must wrap the value in single or double quotation marks. For example, -b cn=user,ou=Product Development,dc=example,dc=com . -D The Distinguished Name (DN) with which you want to authenticate. -H An LDAP URL to connect to the server. The -H option replaces the -h and -p options. -l The time limit in seconds to wait for a search request to complete. -s scope The scope of the search. You can choose one of the following for the scope: base searches only the entry from the -b option or defined by the LDAP_BASEDN environment variable. one searches only the children of the entry from the -b option. sub a subtree search from the -b option starting point. -W Requests for the password. -x Disables the default SASL connection to allow simple binds. -Y SASL_mechanism Sets the SASL mechanism for the authentication. -z number The maximum number of entries in the search result. Note, you must specify one of the authentication mechanisms with the -x or -Y option with the ldapsearch command. Additional resources For details on how to use ldapsearch , see ldapsearch(1) man page on your system. 11.2. Using the ldapsearch filters The ldapsearch filters allow you to narrow down the search results. For example, you want the search result to contain all the entries with a common names set to example : In this case, the equal sign (=) is the operator, and example is the value. Table 11.2. The ldapsearch filter operators Search type Operator Description Equality = Returns the entries with the exact match to the value. For example, cn=example . Substring =string* string Returns all entries with the substring match. 
For example, cn=exa*l . The asterisk (*) indicates zero (0) or more characters. Greater than or equal to >= Returns all entries with attributes that are greater than or equal to the value. For example, uidNumber >= 5000 . Less than or equal to <= Returns all entries with attributes that are less than or equal to the value. For example, uidNumber <= 5000 . Presence =* Returns all entries with one or more attributes. For example, cn=* . Approximate ~= Returns all entries with the similar to the value attributes. For example, l~=san fransico can return l=san francisco . You can use boolean operators to combine multiple filters to the ldapsearch command. Table 11.3. The ldapsearch filter boolean operators Search type Operator Description AND & Returns all entries where all statements in the filters are true. For example, (&(filter)(filter)(filter)... ) . OR | Returns all entries where at least one statement in the filters is true. For example, (|(filter)(filter)(filter)... ) . NOT ! Returns all entries where the statement in the filter is not true. For example, (!(filter)) . | [
"ldapsearch [-x | -Y mechanism] [options] [search_filter] [list_of_attributes]",
"ldapsearch -x -H ldap://ldap.example.com -s sub \"(uid=user01)\"",
"\"(cn=example)\""
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/assembly_searching-idm-entries_managing-users-groups-hosts |
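Putting the options from Table 11.1 and the operators from Tables 11.2 and 11.3 together, the following hedged example performs an anonymous subtree search and combines two filters with the AND operator. The server URL, base DN, and requested attributes are assumptions chosen to resemble a typical IdM tree; adjust them to your environment.

# Return the cn and mail attributes of entries whose uid starts with "user"
# and that have a mail attribute present
ldapsearch -x -H ldap://ldap.example.com \
  -b "cn=users,cn=accounts,dc=example,dc=com" -s sub \
  "(&(uid=user*)(mail=*))" cn mail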
Chapter 4. An active/active Samba Server in a Red Hat High Availability Cluster (Red Hat Enterprise Linux 7.4 and Later) | Chapter 4. An active/active Samba Server in a Red Hat High Availability Cluster (Red Hat Enterprise Linux 7.4 and Later) As of the Red Hat Enterprise Linux 7.4 release, the Red Hat Resilient Storage Add-On provides support for running Samba in an active/active cluster configuration using Pacemaker. The Red Hat Resilient Storage Add-On includes the High Availability Add-On. Note For further information on support policies for Samba, see Support Policies for RHEL Resilient Storage - ctdb General Policies and Support Policies for RHEL Resilient Storage - Exporting gfs2 contents via other protocols on the Red Hat Customer Portal. This chapter describes how to configure a highly available active/active Samba server on a two-node Red Hat Enterprise Linux High Availability Add-On cluster using shared storage. The procedure uses pcs to configure Pacemaker cluster resources. This use case requires that your system include the following components: Two nodes, which will be used to create the cluster running Clustered Samba. In this example, the nodes used are z1.example.com and z2.example.com which have IP address of 192.168.1.151 and 192.168.1.152 . A power fencing device for each node of the cluster. This example uses two ports of the APC power switch with a host name of zapc.example.com . Shared storage for the nodes in the cluster, using iSCSI or Fibre Channel. Configuring a highly available active/active Samba server on a two-node Red Hat Enterprise Linux High Availability Add-On cluster requires that you perform the following steps. Create the cluster that will export the Samba shares and configure fencing for each node in the cluster, as described in Section 4.1, "Creating the Cluster" . Configure a gfs2 file system mounted on the clustered LVM logical volume my_clv on the shared storage for the nodes in the cluster, as described in Section 4.2, "Configuring a Clustered LVM Volume with a GFS2 File System" . Configure Samba on each node in the cluster, Section 4.3, "Configuring Samba" . Create the Samba cluster resources as described in Section 4.4, "Configuring the Samba Cluster Resources" . Test the Samba share you have configured, as described in Section 4.5, "Testing the Resource Configuration" . 4.1. Creating the Cluster Use the following procedure to install and create the cluster to use for the Samba service: Install the cluster software on nodes z1.example.com and z2.example.com , using the procedure provided in Section 1.1, "Cluster Software Installation" . Create the two-node cluster that consists of z1.example.com and z2.example.com , using the procedure provided in Section 1.2, "Cluster Creation" . As in that example procedure, this use case names the cluster my_cluster . Configure fencing devices for each node of the cluster, using the procedure provided in Section 1.3, "Fencing Configuration" . This example configures fencing using two ports of the APC power switch with a host name of zapc.example.com . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_administration/ch-hasamba-HAAA |
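The cluster-creation and fencing steps referenced above are documented in earlier sections of the same guide; the following is only a hedged outline of what they typically look like with pcs on Red Hat Enterprise Linux 7, reusing the node and fence-device names from this example. The package set, the hacluster password step, and the APC credentials are placeholders — follow the referenced sections for the authoritative procedure.

# On both nodes: install the cluster software, start pcsd, and set the hacluster password
yum install -y pcs pacemaker fence-agents-apc-snmp
systemctl start pcsd.service
systemctl enable pcsd.service
passwd hacluster

# From one node: authenticate the nodes and create the two-node cluster
pcs cluster auth z1.example.com z2.example.com
pcs cluster setup --start --name my_cluster z1.example.com z2.example.com
pcs cluster enable --all

# Configure APC power-switch fencing for both nodes (credentials are placeholders)
pcs stonith create myapc fence_apc_snmp \
  ipaddr="zapc.example.com" pcmk_host_map="z1.example.com:1;z2.example.com:2" \
  login="apc" passwd="apc_password"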
Chapter 3. Tutorial: ROSA with HCP private offer acceptance and sharing | Chapter 3. Tutorial: ROSA with HCP private offer acceptance and sharing This guide describes how to accept a private offer for Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) and how to ensure that all team members can use the private offer for the clusters they provision. ROSA with HCP costs are composed of the AWS infrastructure costs and the ROSA with HCP service costs. AWS infrastructure costs, such as the EC2 instances that are running the needed workloads, are charged to the AWS account where the infrastructure is deployed. ROSA service costs are charged to the AWS account specified as the "AWS billing account" when deploying a cluster. The cost components can be billed to different AWS accounts. Detailed description of how the ROSA service cost and AWS infrastructure costs are calculated can be found on the Red Hat OpenShift Service on AWS Pricing page . 3.1. Accepting a private offer When you get a private offer for ROSA with HCP, you are provided with a unique URL that is accessible only by a specific AWS account ID that was specified by the seller. Note Verify that you are logged in using the AWS account that was specified as the buyer. Attempting to access the offer using another AWS account produces a "page not found" error message as shown in Figure 11 in the troubleshooting section below. You can see the offer selection drop down menu with a regular private offer pre-selected in Figure 1. This type of offer can be accepted only if the ROSA with HCP was not activated before using the public offer or another private offer. Figure 3.1. Regular private offer You can see a private offer that was created for an AWS account that previously activated ROSA with HCP using the public offer, showing the product name and the selected private offer labeled as "Upgrade", that replaces the currently running contract for ROSA with HCP in Figure 2. Figure 3.2. Private offer selection selection screen The drop down menu allows selecting between multiple offers, if available. The previously activated public offer is shown together with the newly provided agreement based offer that is labeled as "Upgrade" in Figure 3. Figure 3.3. Private offer selection dropdown Verify that your offer configuration is selected. Figure 4 shows the bottom part of the offer page with the offer details. Note The contract end date, the number of units included with the offer, and the payment schedule. In this example, 1 cluster and up to 3 nodes utilizing 4 vCPUs are included. Figure 3.4. Private offer details Optional: you can add your own purchase order (PO) number to the subscription that is being purchased, so it is included on your subsequent AWS invoices. Also, check the "Additional usage fees" that are charged for any usage above the scope of the "New offer configuration details". Note Private offers have several available configurations. It is possible that the private offer you are accepting is set up with a fixed future start date. If you do not have another active ROSA with HCP subscription at the time of accepting the private offer, a public offer or an older private offer entitlement, accept the private offer itself and continue with the account linking and cluster deployment steps after the specified service start date. You must have an active ROSA with HCP entitlement to complete these steps. Service start dates are always reported in the UTC time zone Create or upgrade your contract. 
For private offers accepted by an AWS account that does not have ROSA with HCP activated yet and is creating the first contract for this service, click the Create contract button . Figure 3.5. Create contract button For agreement-based offers, click the Upgrade current contract button shown in Figures 4 and 6. Figure 3.6. Upgrade contract button Click Confirm . Figure 3.7. Private offer acceptance confirmation window If the accepted private offer service start date is set to be immediately following the offer acceptance, click the Set up your account button in the confirmation modal window. Figure 3.8. Subscription confirmation If the accepted private offer has a future start date specified, return to the private offer page after the service start date, and click the Setup your account button to proceed with the Red Hat and AWS account linking. Note With no agreement active, the account linking described below is not triggered, the "Account setup" process can be done only after the "Service start date". These are always in UTC time zone. 3.2. Sharing a private offer Clicking the Set up your account button in the step takes you to the AWS and Red Hat account linking step. At this time, you are already logged in with the AWS account that accepted the offer. If you are not logged in with a Red Hat account, you will be prompted to do so. ROSA with HCP entitlement is shared with other team members through your Red Hat organization account. All existing users in the same Red Hat organization are able to select the billing AWS account that accepted the private offer by following the above described steps. You can manage users in your Red Hat organization , when logged in as the Red Hat organization administrator, and invite or create new users. Note ROSA with HCP private offer cannot be shared with AWS linked accounts through the AWS License Manager. Add any users that you want to deploy ROSA clusters. Check this user management FAQ for more details about Red Hat account user management tasks. Verify that the already logged in Red Hat account includes all users that are meant to be ROSA cluster deployers benefiting from the accepted private offer. Verify that the Red Hat account number and the AWS account ID are the desired accounts that are to be linked. This linking is unique and a Red Hat account can be connected only with a single AWS (billing) account. Figure 3.9. AWS and Red Hat accounts connection If you want to link the AWS account with another Red Hat account than is shown on this page in Figure 9, log out from the Red Hat Hybrid Cloud Console before connecting the accounts and repeat the step of setting the account by returning to the private offer URL that is already accepted. An AWS account can be connected with a single Red Hat account only. Once Red Hat and AWS accounts are connected, this cannot be changed by the user. If a change is needed, the user must create a support ticket. Agree to the terms and conditions and then click Connect accounts . 3.3. AWS billing account selection When deploying ROSA with HCP clusters, verify that end users select the AWS billing account that accepted the private offer. When using the web interface for deploying ROSA with HCP, the Associated AWS infrastructure account" is typically set to the AWS account ID used by the administrator of the cluster that is being created. This can be the same AWS account as the billing AWS account. 
AWS resources are deployed into this account and all the billing associated with those resources is processed accordingly. Figure 3.10. Infrastructure and billing AWS account selection during ROSA with HCP cluster deployment The drop-down for the AWS billing account in the screenshot above should be set to the AWS account that accepted the private offer, provided that the purchased quota is intended to be used by the cluster that is being created. If different AWS accounts are selected in the infrastructure and billing "roles", the blue informative note visible in Figure 10 is shown. 3.4. Troubleshooting This section covers the most frequent issues associated with private offer acceptance and Red Hat account linking. 3.4.1. Accessing a private offer using a different AWS account If you try accessing the private offer when logged in under an AWS account ID that is not defined in the offer, and see the message shown in Figure 11, then verify that you are logged in as the desired AWS billing account. Figure 3.11. HTTP 404 error when using the private offer URL Contact the seller if you need the private offer to be extended to another AWS account. 3.4.2. The private offer cannot be accepted because of an active subscription If you try accessing a private offer that was created for a first-time ROSA with HCP activation, while you already have ROSA with HCP activated using another public or private offer, and see the following notice, then contact the seller who provided you with the offer. The seller can provide you with a new offer that will seamlessly replace your current agreement, without the need to cancel your subscription. Figure 3.12. Existing subscription preventing private offer acceptance 3.4.3. The AWS account is already linked to a different Red Hat account If you see the error message "AWS account is already linked to a different Red Hat account" when you try to connect the AWS account that accepted the private offer with the presently logged-in Red Hat user, then the AWS account is already connected to another Red Hat user. Figure 3.13. AWS account is already linked to a different Red Hat account You can either log in using another Red Hat account or another AWS account. However, since this guide pertains to private offers, the assumption is that you are logged in with the AWS account that was specified as the buyer and already accepted the private offer, so it is intended to be used as the billing account. Logging in with another AWS account is not expected after a private offer was accepted. You can still log in with another Red Hat user that is already connected to the AWS account that accepted the private offer. Other Red Hat users belonging to the same Red Hat organization are able to use the linked AWS account as the ROSA with HCP AWS billing account when creating clusters, as seen in Figure 10. If you believe that the existing account linking might not be correct, see the "My team members belong to different Red Hat organizations" question below for tips on how you can proceed. 3.4.4. My team members belong to different Red Hat organizations An AWS account can be connected to a single Red Hat account only. Any user that wants to create a cluster and benefit from the private offer granted to this AWS account needs to be in the same Red Hat account. This can be achieved by inviting the user to the same Red Hat account or by creating a new Red Hat user in that account. 3.4.5.
Incorrect AWS billing account was selected when creating a cluster If the user selected an incorrect AWS billing account, the fastest way to fix this is to delete the cluster and create a new one, while selecting the correct AWS billing account. If this is a production cluster that cannot be easily deleted, please contact Red Hat support to change the billing account for an existing cluster. Expect some turnaround time for this to be resolved. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/tutorials/rosa-with-hcp-private-offer-acceptance-and-sharing |
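When the cluster is created with the rosa CLI instead of the web interface, the billing account can be passed explicitly at creation time. The following is a minimal, illustrative sketch only; the cluster name and account ID are placeholders, and the exact set of required flags (account roles, region, and so on) depends on your rosa CLI version and configuration:

rosa create cluster --cluster-name=my-hcp-cluster --sts --mode=auto --hosted-cp --billing-account=111122223333

Here 111122223333 stands for the AWS account that accepted the private offer, so the ROSA with HCP service charges land on that account, while the infrastructure charges follow the AWS credentials and account roles used to run the command.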
Installing Debezium on OpenShift | Installing Debezium on OpenShift Red Hat Integration 2023.q4 For use with Red Hat Integration 2.3.4 on OpenShift Container Platform Red Hat Integration Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/installing_debezium_on_openshift/index |
Chapter 5. Requesting, Enrolling, and Managing Certificates | Chapter 5. Requesting, Enrolling, and Managing Certificates Certificates are requested and used by end users. Although certificate enrollment and renewal are not limited to administrators, understanding the enrollment and renewal processes can make it easier for administrators to manage and create appropriate certificate profiles, as described in Section 3.2, "Setting up Certificate Profiles", and to use suitable authentication methods (described in Chapter 10, Authentication for Enrolling Certificates) for each certificate type. This chapter discusses requesting, receiving, and renewing certificates for use outside Certificate System. For information on requesting and renewing Certificate System subsystem certificates, see Chapter 17, Managing Subsystem Certificates. 5.1. About Enrolling and Renewing Certificates Enrollment is the process for requesting and receiving a certificate. The mechanics of the enrollment process differ slightly depending on the type of certificate, the method for generating its key pair, and the method for generating and approving the certificate itself. Whatever the specific method, certificate enrollment, at a high level, has the same basic steps: A certificate request (CSR) is generated. The certificate request is submitted to the CA. The request is verified by authenticating the entity that requested it and by confirming that the request meets the certificate profile rules that were used to submit it. The request is approved. The requesting party retrieves the new certificate. When the certificate reaches the end of its validity period, it can be renewed. | null | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/managing_certificates
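As an illustrative sketch of the first step, a CSR can be generated with a generic OpenSSL invocation; this is not the Certificate System-specific tooling, and the key size, file names, and subject shown are placeholder values:

openssl req -new -newkey rsa:2048 -nodes -keyout example.key -out example.csr -subj "/CN=server.example.com/O=Example Corp"

The resulting example.csr file is what is submitted to the CA, where it is checked against the certificate profile rules and the configured authentication method before approval.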
14.12.7. Re-sizing Storage Volumes | 14.12.7. Re-sizing Storage Volumes This command re-sizes the capacity of the given volume, in bytes. The command requires --pool pool-or-uuid, which is the name or UUID of the storage pool the volume is in. This command also requires vol-name-or-key-or-path, which is the name, key, or path of the volume to re-size. The new capacity may create a sparse file unless the --allocate option is specified. Normally, capacity is the new size, but if --delta is present, then it is added to the existing size. Attempts to shrink the volume will fail unless the --shrink option is present. Note that capacity cannot be negative unless the --shrink option is provided, and a negative sign is not necessary. capacity is a scaled integer which defaults to bytes if there is no suffix. Note too that this command is only safe for storage volumes not in use by an active guest. Refer to Section 14.5.17, "Using blockresize to Change the Size of a Domain Path" for live re-sizing. | [
"vol-resize --pool pool-or-uuid vol-name-or-path pool-or-uuid capacity --allocate --delta --shrink"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-storage_volume_commands-re_sizing_storage_volumes |
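As an illustrative sketch (the pool name, volume name, and sizes are placeholders), a volume can be grown to an absolute size, grown by a relative amount with --delta, or shrunk with --shrink:

# virsh vol-resize --pool guest_images volume1 20G
# virsh vol-resize --pool guest_images volume1 1G --delta
# virsh vol-resize --pool guest_images volume1 10G --shrink

The first command sets the capacity to 20 GiB, the second adds 1 GiB to the current capacity, and the third reduces the capacity to 10 GiB, which succeeds only because --shrink is specified.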
Chapter 7. Managing Red Hat High Availability Add-On With ccs | Chapter 7. Managing Red Hat High Availability Add-On With ccs This chapter describes various administrative tasks for managing the Red Hat High Availability Add-On by means of the ccs command, which is supported as of the Red Hat Enterprise Linux 6.1 release and later. This chapter consists of the following sections: Section 7.1, "Managing Cluster Nodes" Section 7.2, "Starting and Stopping a Cluster" Section 7.3, "Diagnosing and Correcting Problems in a Cluster" 7.1. Managing Cluster Nodes This section documents how to perform the following node-management functions with the ccs command: Section 7.1.1, "Causing a Node to Leave or Join a Cluster" Section 7.1.2, "Adding a Member to a Running Cluster" 7.1.1. Causing a Node to Leave or Join a Cluster You can use the ccs command to cause a node to leave a cluster by stopping cluster services on that node. Causing a node to leave a cluster does not remove the cluster configuration information from that node. Making a node leave a cluster prevents the node from automatically joining the cluster when it is rebooted. To cause a node to leave a cluster, execute the following command, which stops cluster services on the node specified with the -h option: When you stop cluster services on a node, any service that is running on that node will fail over. To delete a node entirely from the cluster configuration, use the --rmnode option of the ccs command, as described in Section 6.4, "Creating and Modifying a Cluster" . To cause a node to rejoin a cluster execute the following command, which starts cluster services on the node specified with the -h option: 7.1.2. Adding a Member to a Running Cluster To add a member to a running cluster, add a node to the cluster as described in Section 6.4, "Creating and Modifying a Cluster" . After updating the configuration file, propagate the file to all nodes in the cluster and be sure to activate the new cluster configuration file, as described in Section 6.15, "Propagating the Configuration File to the Cluster Nodes" . Note When you add a node to a cluster that uses UDPU transport, you must restart all nodes in the cluster for the change to take effect. | [
"ccs -h host --stop",
"ccs -h host --start"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/ch-mgmt-ccs-CA |
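As a brief illustration (the node host name is a placeholder), stopping and then restarting cluster services on a single node looks like this:

ccs -h node01.example.com --stop
ccs -h node01.example.com --start

While the services are stopped, any clustered service that was running on node01.example.com fails over to another node, and the node does not automatically rejoin the cluster on reboot until cluster services are started again.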
Chapter 3. PerformanceProfile [performance.openshift.io/v2] | Chapter 3. PerformanceProfile [performance.openshift.io/v2] Description PerformanceProfile is the Schema for the performanceprofiles API Type object 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object PerformanceProfileSpec defines the desired state of PerformanceProfile. status object PerformanceProfileStatus defines the observed state of PerformanceProfile. 3.1.1. .spec Description PerformanceProfileSpec defines the desired state of PerformanceProfile. Type object Required cpu nodeSelector Property Type Description additionalKernelArgs array (string) Addional kernel arguments. cpu object CPU defines a set of CPU related parameters. globallyDisableIrqLoadBalancing boolean GloballyDisableIrqLoadBalancing toggles whether IRQ load balancing will be disabled for the Isolated CPU set. When the option is set to "true" it disables IRQs load balancing for the Isolated CPU set. Setting the option to "false" allows the IRQs to be balanced across all CPUs, however the IRQs load balancing can be disabled per pod CPUs when using irq-load-balancing.crio.io/cpu-quota.crio.io annotations. Defaults to "false" hugepages object HugePages defines a set of huge pages related parameters. It is possible to set huge pages with multiple size values at the same time. For example, hugepages can be set with 1G and 2M, both values will be set on the node by the Performance Profile Controller. It is important to notice that setting hugepages default size to 1G will remove all 2M related folders from the node and it will be impossible to configure 2M hugepages under the node. machineConfigLabel object (string) MachineConfigLabel defines the label to add to the MachineConfigs the operator creates. It has to be used in the MachineConfigSelector of the MachineConfigPool which targets this performance profile. Defaults to "machineconfiguration.openshift.io/role=<same role as in NodeSelector label key>" machineConfigPoolSelector object (string) MachineConfigPoolSelector defines the MachineConfigPool label to use in the MachineConfigPoolSelector of resources like KubeletConfigs created by the operator. Defaults to "machineconfiguration.openshift.io/role=<same role as in NodeSelector label key>" net object Net defines a set of network related features nodeSelector object (string) NodeSelector defines the Node label to use in the NodeSelectors of resources like Tuned created by the operator. It most likely should, but does not have to match the node label in the NodeSelector of the MachineConfigPool which targets this performance profile. 
In the case when machineConfigLabels or machineConfigPoolSelector are not set, we are expecting a certain NodeSelector format <domain>/<role>: "" in order to be able to calculate the default values for the former mentioned fields. numa object NUMA defines options related to topology aware affinities realTimeKernel object RealTimeKernel defines a set of real time kernel related parameters. RT kernel won't be installed when not set. workloadHints object WorkloadHints defines hints for different types of workloads. It will allow defining exact set of tuned and kernel arguments that should be applied on top of the node. 3.1.2. .spec.cpu Description CPU defines a set of CPU related parameters. Type object Required isolated reserved Property Type Description balanceIsolated boolean BalanceIsolated toggles whether or not the Isolated CPU set is eligible for load balancing work loads. When this option is set to "false", the Isolated CPU set will be static, meaning workloads have to explicitly assign each thread to a specific cpu in order to work across multiple CPUs. Setting this to "true" allows workloads to be balanced across CPUs. Setting this to "false" offers the most predictable performance for guaranteed workloads, but it offloads the complexity of cpu load balancing to the application. Defaults to "true" isolated string Isolated defines a set of CPUs that will be used to give to application threads the most execution time possible, which means removing as many extraneous tasks off a CPU as possible. It is important to notice the CPU manager can choose any CPU to run the workload except the reserved CPUs. In order to guarantee that your workload will run on the isolated CPU: 1. The union of reserved CPUs and isolated CPUs should include all online CPUs 2. The isolated CPUs field should be the complementary to reserved CPUs field offlined string Offline defines a set of CPUs that will be unused and set offline reserved string Reserved defines a set of CPUs that will not be used for any container workloads initiated by kubelet. 3.1.3. .spec.hugepages Description HugePages defines a set of huge pages related parameters. It is possible to set huge pages with multiple size values at the same time. For example, hugepages can be set with 1G and 2M, both values will be set on the node by the Performance Profile Controller. It is important to notice that setting hugepages default size to 1G will remove all 2M related folders from the node and it will be impossible to configure 2M hugepages under the node. Type object Property Type Description defaultHugepagesSize string DefaultHugePagesSize defines huge pages default size under kernel boot parameters. pages array Pages defines huge pages that we want to allocate at boot time. pages[] object HugePage defines the number of allocated huge pages of the specific size. 3.1.4. .spec.hugepages.pages Description Pages defines huge pages that we want to allocate at boot time. Type array 3.1.5. .spec.hugepages.pages[] Description HugePage defines the number of allocated huge pages of the specific size. Type object Property Type Description count integer Count defines amount of huge pages, maps to the 'hugepages' kernel boot parameter. node integer Node defines the NUMA node where hugepages will be allocated, if not specified, pages will be allocated equally between NUMA nodes size string Size defines huge page size, maps to the 'hugepagesz' kernel boot parameter. 3.1.6. 
.spec.net Description Net defines a set of network related features Type object Property Type Description devices array Devices contains a list of network device representations that will be set with a netqueue count equal to CPU.Reserved . If no devices are specified then the default is all devices. devices[] object Device defines a way to represent a network device in several options: device name, vendor ID, model ID, PCI path and MAC address userLevelNetworking boolean UserLevelNetworking when enabled - sets either all or specified network devices queue size to the amount of reserved CPUs. Defaults to "false". 3.1.7. .spec.net.devices Description Devices contains a list of network device representations that will be set with a netqueue count equal to CPU.Reserved . If no devices are specified then the default is all devices. Type array 3.1.8. .spec.net.devices[] Description Device defines a way to represent a network device in several options: device name, vendor ID, model ID, PCI path and MAC address Type object Property Type Description deviceID string Network device ID (model) represnted as a 16 bit hexmadecimal number. interfaceName string Network device name to be matched. It uses a syntax of shell-style wildcards which are either positive or negative. vendorID string Network device vendor ID represnted as a 16 bit Hexmadecimal number. 3.1.9. .spec.numa Description NUMA defines options related to topology aware affinities Type object Property Type Description topologyPolicy string Name of the policy applied when TopologyManager is enabled Operator defaults to "best-effort" 3.1.10. .spec.realTimeKernel Description RealTimeKernel defines a set of real time kernel related parameters. RT kernel won't be installed when not set. Type object Property Type Description enabled boolean Enabled defines if the real time kernel packages should be installed. Defaults to "false" 3.1.11. .spec.workloadHints Description WorkloadHints defines hints for different types of workloads. It will allow defining exact set of tuned and kernel arguments that should be applied on top of the node. Type object Property Type Description highPowerConsumption boolean HighPowerConsumption defines if the node should be configured in high power consumption mode. The flag will affect the power consumption but will improve the CPUs latency. perPodPowerManagement boolean PerPodPowerManagement defines if the node should be configured in per pod power management. PerPodPowerManagement and HighPowerConsumption hints can not be enabled together. realTime boolean RealTime defines if the node should be configured for the real time workload. 3.1.12. .status Description PerformanceProfileStatus defines the observed state of PerformanceProfile. Type object Property Type Description conditions array Conditions represents the latest available observations of current state. conditions[] object Condition represents the state of the operator's reconciliation functionality. runtimeClass string RuntimeClass contains the name of the RuntimeClass resource created by the operator. tuned string Tuned points to the Tuned custom resource object that contains the tuning values generated by this operator. 3.1.13. .status.conditions Description Conditions represents the latest available observations of current state. Type array 3.1.14. .status.conditions[] Description Condition represents the state of the operator's reconciliation functionality. 
Type object Required status type Property Type Description lastHeartbeatTime string lastTransitionTime string message string reason string status string type string ConditionType is the state of the operator's reconciliation functionality. 3.2. API endpoints The following API endpoints are available: /apis/performance.openshift.io/v2/performanceprofiles DELETE : delete collection of PerformanceProfile GET : list objects of kind PerformanceProfile POST : create a PerformanceProfile /apis/performance.openshift.io/v2/performanceprofiles/{name} DELETE : delete a PerformanceProfile GET : read the specified PerformanceProfile PATCH : partially update the specified PerformanceProfile PUT : replace the specified PerformanceProfile /apis/performance.openshift.io/v2/performanceprofiles/{name}/status GET : read status of the specified PerformanceProfile PATCH : partially update status of the specified PerformanceProfile PUT : replace status of the specified PerformanceProfile 3.2.1. /apis/performance.openshift.io/v2/performanceprofiles Table 3.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of PerformanceProfile Table 3.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind PerformanceProfile Table 3.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.5. HTTP responses HTTP code Reponse body 200 - OK PerformanceProfileList schema 401 - Unauthorized Empty HTTP method POST Description create a PerformanceProfile Table 3.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.7. Body parameters Parameter Type Description body PerformanceProfile schema Table 3.8. HTTP responses HTTP code Reponse body 200 - OK PerformanceProfile schema 201 - Created PerformanceProfile schema 202 - Accepted PerformanceProfile schema 401 - Unauthorized Empty 3.2.2. /apis/performance.openshift.io/v2/performanceprofiles/{name} Table 3.9. Global path parameters Parameter Type Description name string name of the PerformanceProfile Table 3.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a PerformanceProfile Table 3.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 3.12. Body parameters Parameter Type Description body DeleteOptions schema Table 3.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified PerformanceProfile Table 3.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 3.15. HTTP responses HTTP code Reponse body 200 - OK PerformanceProfile schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PerformanceProfile Table 3.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.17. Body parameters Parameter Type Description body Patch schema Table 3.18. HTTP responses HTTP code Reponse body 200 - OK PerformanceProfile schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PerformanceProfile Table 3.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.20. Body parameters Parameter Type Description body PerformanceProfile schema Table 3.21. HTTP responses HTTP code Reponse body 200 - OK PerformanceProfile schema 201 - Created PerformanceProfile schema 401 - Unauthorized Empty 3.2.3. /apis/performance.openshift.io/v2/performanceprofiles/{name}/status Table 3.22. Global path parameters Parameter Type Description name string name of the PerformanceProfile Table 3.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified PerformanceProfile Table 3.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 3.25. HTTP responses HTTP code Reponse body 200 - OK PerformanceProfile schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified PerformanceProfile Table 3.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.27. Body parameters Parameter Type Description body Patch schema Table 3.28. HTTP responses HTTP code Reponse body 200 - OK PerformanceProfile schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified PerformanceProfile Table 3.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.30. Body parameters Parameter Type Description body PerformanceProfile schema Table 3.31. HTTP responses HTTP code Reponse body 200 - OK PerformanceProfile schema 201 - Created PerformanceProfile schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/node_apis/performanceprofile-performance-openshift-io-v2 |
Chapter 7. Deployments | Chapter 7. Deployments 7.1. Understanding Deployment and DeploymentConfig objects The Deployment and DeploymentConfig API objects in OpenShift Container Platform provide two similar but different methods for fine-grained management over common user applications. They are composed of the following separate API objects: A DeploymentConfig or Deployment object, either of which describes the desired state of a particular component of the application as a pod template. DeploymentConfig objects involve one or more replication controllers , which contain a point-in-time record of the state of a deployment as a pod template. Similarly, Deployment objects involve one or more replica sets , a successor of replication controllers. One or more pods, which represent an instance of a particular version of an application. 7.1.1. Building blocks of a deployment Deployments and deployment configs are enabled by the use of native Kubernetes API objects ReplicaSet and ReplicationController , respectively, as their building blocks. Users do not have to manipulate replication controllers, replica sets, or pods owned by DeploymentConfig objects or deployments. The deployment systems ensure changes are propagated appropriately. Tip If the existing deployment strategies are not suited for your use case and you must run manual steps during the lifecycle of your deployment, then you should consider creating a custom deployment strategy. The following sections provide further details on these objects. 7.1.1.1. Replication controllers A replication controller ensures that a specified number of replicas of a pod are running at all times. If pods exit or are deleted, the replication controller acts to instantiate more up to the defined number. Likewise, if there are more running than desired, it deletes as many as necessary to match the defined amount. A replication controller configuration consists of: The number of replicas desired, which can be adjusted at run time. A Pod definition to use when creating a replicated pod. A selector for identifying managed pods. A selector is a set of labels assigned to the pods that are managed by the replication controller. These labels are included in the Pod definition that the replication controller instantiates. The replication controller uses the selector to determine how many instances of the pod are already running in order to adjust as needed. The replication controller does not perform auto-scaling based on load or traffic, as it does not track either. Rather, this requires its replica count to be adjusted by an external auto-scaler. The following is an example definition of a replication controller: apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always 1 The number of copies of the pod to run. 2 The label selector of the pod to run. 3 A template for the pod the controller creates. 4 Labels on the pod should include those from the label selector. 5 The maximum name length after expanding any parameters is 63 characters. 7.1.1.2. Replica sets Similar to a replication controller, a ReplicaSet is a native Kubernetes API object that ensures a specified number of pod replicas are running at any given time. 
The difference between a replica set and a replication controller is that a replica set supports set-based selector requirements whereas a replication controller only supports equality-based selector requirements. Note Only use replica sets if you require custom update orchestration or do not require updates at all. Otherwise, use deployments. Replica sets can be used independently, but are used by deployments to orchestrate pod creation, deletion, and updates. Deployments manage their replica sets automatically, provide declarative updates to pods, and do not have to manually manage the replica sets that they create. The following is an example ReplicaSet definition: apiVersion: apps/v1 kind: ReplicaSet metadata: name: frontend-1 labels: tier: frontend spec: replicas: 3 selector: 1 matchLabels: 2 tier: frontend matchExpressions: 3 - {key: tier, operator: In, values: [frontend]} template: metadata: labels: tier: frontend spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always 1 A label query over a set of resources. The result of matchLabels and matchExpressions are logically conjoined. 2 Equality-based selector to specify resources with labels that match the selector. 3 Set-based selector to filter keys. This selects all resources with key equal to tier and value equal to frontend . 7.1.2. DeploymentConfig objects Building on replication controllers, OpenShift Container Platform adds expanded support for the software development and deployment lifecycle with the concept of DeploymentConfig objects. In the simplest case, a DeploymentConfig object creates a new replication controller and lets it start up pods. However, OpenShift Container Platform deployments from DeploymentConfig objects also provide the ability to transition from an existing deployment of an image to a new one and also define hooks to be run before or after creating the replication controller. The DeploymentConfig deployment system provides the following capabilities: A DeploymentConfig object, which is a template for running applications. Triggers that drive automated deployments in response to events. User-customizable deployment strategies to transition from the version to the new version. A strategy runs inside a pod commonly referred as the deployment process. A set of hooks (lifecycle hooks) for executing custom behavior in different points during the lifecycle of a deployment. Versioning of your application to support rollbacks either manually or automatically in case of deployment failure. Manual replication scaling and autoscaling. When you create a DeploymentConfig object, a replication controller is created representing the DeploymentConfig object's pod template. If the deployment changes, a new replication controller is created with the latest pod template, and a deployment process runs to scale down the old replication controller and scale up the new one. Instances of your application are automatically added and removed from both service load balancers and routers as they are created. As long as your application supports graceful shutdown when it receives the TERM signal, you can ensure that running user connections are given a chance to complete normally. The OpenShift Container Platform DeploymentConfig object defines the following details: The elements of a ReplicationController definition. Triggers for creating a new deployment automatically. The strategy for transitioning between deployments. Lifecycle hooks. 
Each time a deployment is triggered, whether manually or automatically, a deployer pod manages the deployment (including scaling down the old replication controller, scaling up the new one, and running hooks). The deployment pod remains for an indefinite amount of time after it completes the deployment to retain its logs of the deployment. When a deployment is superseded by another, the replication controller is retained to enable easy rollback if needed. Example DeploymentConfig definition apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: frontend spec: replicas: 5 selector: name: frontend template: { ... } triggers: - type: ConfigChange 1 - imageChangeParams: automatic: true containerNames: - helloworld from: kind: ImageStreamTag name: hello-openshift:latest type: ImageChange 2 strategy: type: Rolling 3 1 A configuration change trigger results in a new replication controller whenever changes are detected in the pod template of the deployment configuration. 2 An image change trigger causes a new deployment to be created each time a new version of the backing image is available in the named image stream. 3 The default Rolling strategy makes a downtime-free transition between deployments. 7.1.3. Deployments Kubernetes provides a first-class, native API object type in OpenShift Container Platform called Deployment . Deployment objects serve as a descendant of the OpenShift Container Platform-specific DeploymentConfig object. Like DeploymentConfig objects, Deployment objects describe the desired state of a particular component of an application as a pod template. Deployments create replica sets, which orchestrate pod lifecycles. For example, the following deployment definition creates a replica set to bring up one hello-openshift pod: Deployment definition apiVersion: apps/v1 kind: Deployment metadata: name: hello-openshift spec: replicas: 1 selector: matchLabels: app: hello-openshift template: metadata: labels: app: hello-openshift spec: containers: - name: hello-openshift image: openshift/hello-openshift:latest ports: - containerPort: 80 7.1.4. Comparing Deployment and DeploymentConfig objects Both Kubernetes Deployment objects and OpenShift Container Platform-provided DeploymentConfig objects are supported in OpenShift Container Platform; however, it is recommended to use Deployment objects unless you need a specific feature or behavior provided by DeploymentConfig objects. The following sections go into more detail on the differences between the two object types to further help you decide which type to use. 7.1.4.1. Design One important difference between Deployment and DeploymentConfig objects is the properties of the CAP theorem that each design has chosen for the rollout process. DeploymentConfig objects prefer consistency, whereas Deployments objects take availability over consistency. For DeploymentConfig objects, if a node running a deployer pod goes down, it will not get replaced. The process waits until the node comes back online or is manually deleted. Manually deleting the node also deletes the corresponding pod. This means that you can not delete the pod to unstick the rollout, as the kubelet is responsible for deleting the associated pod. However, deployment rollouts are driven from a controller manager. The controller manager runs in high availability mode on masters and uses leader election algorithms to value availability over consistency. 
During a failure it is possible for other masters to act on the same deployment at the same time, but this issue will be reconciled shortly after the failure occurs. 7.1.4.2. DeploymentConfig object-specific features Automatic rollbacks Currently, deployments do not support automatically rolling back to the last successfully deployed replica set in case of a failure. Triggers Deployments have an implicit config change trigger in that every change in the pod template of a deployment automatically triggers a new rollout. If you do not want new rollouts on pod template changes, pause the deployment: USD oc rollout pause deployments/<name> Lifecycle hooks Deployments do not yet support any lifecycle hooks. Custom strategies Deployments do not support user-specified custom deployment strategies yet. 7.1.4.3. Deployment-specific features Rollover The deployment process for Deployment objects is driven by a controller loop, in contrast to DeploymentConfig objects which use deployer pods for every new rollout. This means that the Deployment object can have as many active replica sets as possible, and eventually the deployment controller will scale down all old replica sets and scale up the newest one. DeploymentConfig objects can have at most one deployer pod running, otherwise multiple deployers end up conflicting while trying to scale up what they think should be the newest replication controller. Because of this, only two replication controllers can be active at any point in time. Ultimately, this translates to faster rapid rollouts for Deployment objects. Proportional scaling Because the deployment controller is the sole source of truth for the sizes of new and old replica sets owned by a Deployment object, it is able to scale ongoing rollouts. Additional replicas are distributed proportionally based on the size of each replica set. DeploymentConfig objects cannot be scaled when a rollout is ongoing because the controller will end up having issues with the deployer process about the size of the new replication controller. Pausing mid-rollout Deployments can be paused at any point in time, meaning you can also pause ongoing rollouts. On the other hand, you cannot pause deployer pods currently, so if you try to pause a deployment in the middle of a rollout, the deployer process will not be affected and will continue until it finishes. 7.2. Managing deployment processes 7.2.1. Managing DeploymentConfig objects DeploymentConfig objects can be managed from the OpenShift Container Platform web console's Workloads page or using the oc CLI. The following procedures show CLI usage unless otherwise stated. 7.2.1.1. Starting a deployment You can start a rollout to begin the deployment process of your application. Procedure To start a new deployment process from an existing DeploymentConfig object, run the following command: USD oc rollout latest dc/<name> Note If a deployment process is already in progress, the command displays a message and a new replication controller will not be deployed. 7.2.1.2. Viewing a deployment You can view a deployment to get basic information about all the available revisions of your application. 
Procedure To show details about all recently created replication controllers for the provided DeploymentConfig object, including any currently running deployment process, run the following command: USD oc rollout history dc/<name> To view details specific to a revision, add the --revision flag: USD oc rollout history dc/<name> --revision=1 For more detailed information about a DeploymentConfig object and its latest revision, use the oc describe command: USD oc describe dc <name> 7.2.1.3. Retrying a deployment If the current revision of your DeploymentConfig object failed to deploy, you can restart the deployment process. Procedure To restart a failed deployment process: USD oc rollout retry dc/<name> If the latest revision of it was deployed successfully, the command displays a message and the deployment process is not retried. Note Retrying a deployment restarts the deployment process and does not create a new deployment revision. The restarted replication controller has the same configuration it had when it failed. 7.2.1.4. Rolling back a deployment Rollbacks revert an application back to a previous revision and can be performed using the REST API, the CLI, or the web console. Procedure To roll back to the last successfully deployed revision of your configuration: USD oc rollout undo dc/<name> The DeploymentConfig object's template is reverted to match the deployment revision specified in the undo command, and a new replication controller is started. If no revision is specified with --to-revision , then the last successfully deployed revision is used. Image change triggers on the DeploymentConfig object are disabled as part of the rollback to prevent accidentally starting a new deployment process soon after the rollback is complete. To re-enable the image change triggers: USD oc set triggers dc/<name> --auto Note Deployment configs also support automatically rolling back to the last successful revision of the configuration in case the latest deployment process fails. In that case, the latest template that failed to deploy stays intact by the system and it is up to users to fix their configurations. 7.2.1.5. Executing commands inside a container You can add a command to a container, which modifies the container's startup behavior by overruling the image's ENTRYPOINT . This is different from a lifecycle hook, which instead can be run once per deployment at a specified time. Procedure Add the command parameters to the spec field of the DeploymentConfig object. You can also add an args field, which modifies the command (or the ENTRYPOINT if command does not exist). spec: containers: - name: <container_name> image: 'image' command: - '<command>' args: - '<argument_1>' - '<argument_2>' - '<argument_3>' For example, to execute the java command with the -jar and /opt/app-root/springboots2idemo.jar arguments: spec: containers: - name: example-spring-boot image: 'image' command: - java args: - '-jar' - /opt/app-root/springboots2idemo.jar 7.2.1.6. Viewing deployment logs Procedure To stream the logs of the latest revision for a given DeploymentConfig object: USD oc logs -f dc/<name> If the latest revision is running or failed, the command returns the logs of the process that is responsible for deploying your pods. If it is successful, it returns the logs from a pod of your application.
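For example, to follow a new rollout from one terminal session and keep its deployer logs, you might combine the rollout and log commands. This is a minimal sketch rather than part of the original procedure, and frontend is a placeholder DeploymentConfig name:
oc rollout latest dc/frontend    # start a new deployment process
oc rollout status dc/frontend    # block until the rollout succeeds or fails
oc logs -f dc/frontend           # stream the deployer logs, or the application pod logs once the rollout succeeds
The oc logs -f command is usually the first thing to check when a rollout hangs, because it shows the deployer pod output while the process is still running.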
You can also view logs from older failed deployment processes, if and only if these processes (old replication controllers and their deployer pods) exist and have not been pruned or deleted manually: USD oc logs --version=1 dc/<name> 7.2.1.7. Deployment triggers A DeploymentConfig object can contain triggers, which drive the creation of new deployment processes in response to events inside the cluster. Warning If no triggers are defined on a DeploymentConfig object, a config change trigger is added by default. If triggers are defined as an empty field, deployments must be started manually. Config change deployment triggers The config change trigger results in a new replication controller whenever configuration changes are detected in the pod template of the DeploymentConfig object. Note If a config change trigger is defined on a DeploymentConfig object, the first replication controller is automatically created soon after the DeploymentConfig object itself is created and it is not paused. Config change deployment trigger triggers: - type: "ConfigChange" Image change deployment triggers The image change trigger results in a new replication controller whenever the content of an image stream tag changes (when a new version of the image is pushed). Image change deployment trigger triggers: - type: "ImageChange" imageChangeParams: automatic: true 1 from: kind: "ImageStreamTag" name: "origin-ruby-sample:latest" namespace: "myproject" containerNames: - "helloworld" 1 If the imageChangeParams.automatic field is set to false , the trigger is disabled. With the above example, when the latest tag value of the origin-ruby-sample image stream changes and the new image value differs from the current image specified in the DeploymentConfig object's helloworld container, a new replication controller is created using the new image for the helloworld container. Note If an image change trigger is defined on a DeploymentConfig object (with a config change trigger and automatic=false , or with automatic=true ) and the image stream tag pointed by the image change trigger does not exist yet, the initial deployment process will automatically start as soon as an image is imported or pushed by a build to the image stream tag. 7.2.1.7.1. Setting deployment triggers Procedure You can set deployment triggers for a DeploymentConfig object using the oc set triggers command. For example, to set a image change trigger, use the following command: USD oc set triggers dc/<dc_name> \ --from-image=<project>/<image>:<tag> -c <container_name> 7.2.1.8. Setting deployment resources A deployment is completed by a pod that consumes resources (memory, CPU, and ephemeral storage) on a node. By default, pods consume unbounded node resources. However, if a project specifies default container limits, then pods consume resources up to those limits. Note The minimum memory limit for a deployment is 12 MB. If a container fails to start due to a Cannot allocate memory pod event, the memory limit is too low. Either increase or remove the memory limit. Removing the limit allows pods to consume unbounded node resources. You can also limit resource use by specifying resource limits as part of the deployment strategy. Deployment resources can be used with the recreate, rolling, or custom deployment strategies. 
Procedure In the following example, each of resources , cpu , memory , and ephemeral-storage is optional: type: "Recreate" resources: limits: cpu: "100m" 1 memory: "256Mi" 2 ephemeral-storage: "1Gi" 3 1 cpu is in CPU units: 100m represents 0.1 CPU units (100 * 1e-3). 2 memory is in bytes: 256Mi represents 268435456 bytes (256 * 2 ^ 20). 3 ephemeral-storage is in bytes: 1Gi represents 1073741824 bytes (2 ^ 30). However, if a quota has been defined for your project, one of the following two items is required: A resources section set with an explicit requests : type: "Recreate" resources: requests: 1 cpu: "100m" memory: "256Mi" ephemeral-storage: "1Gi" 1 The requests object contains the list of resources that correspond to the list of resources in the quota. A limit range defined in your project, where the defaults from the LimitRange object apply to pods created during the deployment process. To set deployment resources, choose one of the above options. Otherwise, deploy pod creation fails, citing a failure to satisfy quota. Additional resources For more information about resource limits and requests, see Understanding managing application memory . 7.2.1.9. Scaling manually In addition to rollbacks, you can exercise fine-grained control over the number of replicas by manually scaling them. Note Pods can also be auto-scaled using the oc autoscale command. Procedure To manually scale a DeploymentConfig object, use the oc scale command. For example, the following command sets the replicas in the frontend DeploymentConfig object to 3 . USD oc scale dc frontend --replicas=3 The number of replicas eventually propagates to the desired and current state of the deployment configured by the DeploymentConfig object frontend . 7.2.1.10. Accessing private repositories from DeploymentConfig objects You can add a secret to your DeploymentConfig object so that it can access images from a private repository. This procedure shows the OpenShift Container Platform web console method. Procedure Create a new project. From the Workloads page, create a secret that contains credentials for accessing a private image repository. Create a DeploymentConfig object. On the DeploymentConfig object editor page, set the Pull Secret and save your changes. 7.2.1.11. Assigning pods to specific nodes You can use node selectors in conjunction with labeled nodes to control pod placement. Cluster administrators can set the default node selector for a project in order to restrict pod placement to specific nodes. As a developer, you can set a node selector on a Pod configuration to restrict nodes even further. Procedure To add a node selector when creating a pod, edit the Pod configuration, and add the nodeSelector value. This can be added to a single Pod configuration, or in a Pod template: apiVersion: v1 kind: Pod spec: nodeSelector: disktype: ssd ... Pods created when the node selector is in place are assigned to nodes with the specified labels. The labels specified here are used in conjunction with the labels added by a cluster administrator. For example, if a project has the type=user-node and region=east labels added to a project by the cluster administrator, and you add the above disktype: ssd label to a pod, the pod is only ever scheduled on nodes that have all three labels. Note Labels can only be set to one value, so setting a node selector of region=west in a Pod configuration that has region=east as the administrator-set default, results in a pod that will never be scheduled. 7.2.1.12. 
Running a pod with a different service account You can run a pod with a service account other than the default. Procedure Edit the DeploymentConfig object: USD oc edit dc/<deployment_config> Add the serviceAccount and serviceAccountName parameters to the spec field, and specify the service account you want to use: spec: securityContext: {} serviceAccount: <service_account> serviceAccountName: <service_account> 7.3. Using deployment strategies A deployment strategy is a way to change or upgrade an application. The aim is to make the change without downtime in a way that the user barely notices the improvements. Because the end user usually accesses the application through a route handled by a router, the deployment strategy can focus on DeploymentConfig object features or routing features. Strategies that focus on the deployment impact all routes that use the application. Strategies that use router features target individual routes. Many deployment strategies are supported through the DeploymentConfig object, and some additional strategies are supported through router features. Deployment strategies are discussed in this section. Choosing a deployment strategy Consider the following when choosing a deployment strategy: Long-running connections must be handled gracefully. Database conversions can be complex and must be done and rolled back along with the application. If the application is a hybrid of microservices and traditional components, downtime might be required to complete the transition. You must have the infrastructure to do this. If you have a non-isolated test environment, you can break both new and old versions. A deployment strategy uses readiness checks to determine if a new pod is ready for use. If a readiness check fails, the DeploymentConfig object retries to run the pod until it times out. The default timeout is 10m , a value set in TimeoutSeconds in dc.spec.strategy.*params . 7.3.1. Rolling strategy A rolling deployment slowly replaces instances of the previous version of an application with instances of the new version of the application. The rolling strategy is the default deployment strategy used if no strategy is specified on a DeploymentConfig object. A rolling deployment typically waits for new pods to become ready via a readiness check before scaling down the old components. If a significant issue occurs, the rolling deployment can be aborted. When to use a rolling deployment: When you want to take no downtime during an application update. When your application supports having old code and new code running at the same time. A rolling deployment means you have both old and new versions of your code running at the same time. This typically requires that your application handle N-1 compatibility. Example rolling strategy definition strategy: type: Rolling rollingParams: updatePeriodSeconds: 1 1 intervalSeconds: 1 2 timeoutSeconds: 120 3 maxSurge: "20%" 4 maxUnavailable: "10%" 5 pre: {} 6 post: {} 1 The time to wait between individual pod updates. If unspecified, this value defaults to 1 . 2 The time to wait between polling the deployment status after update. If unspecified, this value defaults to 1 . 3 The time to wait for a scaling event before giving up. Optional; the default is 600 . Here, giving up means automatically rolling back to the previous complete deployment. 4 maxSurge is optional and defaults to 25% if not specified. See the information below the following procedure. 5 maxUnavailable is optional and defaults to 25% if not specified.
See the information below the following procedure. 6 pre and post are both lifecycle hooks. The rolling strategy: Executes any pre lifecycle hook. Scales up the new replication controller based on the surge count. Scales down the old replication controller based on the max unavailable count. Repeats this scaling until the new replication controller has reached the desired replica count and the old replication controller has been scaled to zero. Executes any post lifecycle hook. Important When scaling down, the rolling strategy waits for pods to become ready so it can decide whether further scaling would affect availability. If scaled up pods never become ready, the deployment process will eventually time out and result in a deployment failure. The maxUnavailable parameter is the maximum number of pods that can be unavailable during the update. The maxSurge parameter is the maximum number of pods that can be scheduled above the original number of pods. Both parameters can be set to either a percentage (e.g., 10% ) or an absolute value (e.g., 2 ). The default value for both is 25% . These parameters allow the deployment to be tuned for availability and speed. For example: maxUnavailable=0 and maxSurge=20% ensures full capacity is maintained during the update and rapid scale up. maxUnavailable=10% and maxSurge=0 performs an update using no extra capacity (an in-place update). maxUnavailable=10% and maxSurge=10% scales up and down quickly with some potential for capacity loss. Generally, if you want fast rollouts, use maxSurge . If you have to take into account resource quota and can accept partial unavailability, use maxUnavailable . 7.3.1.1. Canary deployments All rolling deployments in OpenShift Container Platform are canary deployments ; a new version (the canary) is tested before all of the old instances are replaced. If the readiness check never succeeds, the canary instance is removed and the DeploymentConfig object will be automatically rolled back. The readiness check is part of the application code and can be as sophisticated as necessary to ensure the new instance is ready to be used. If you must implement more complex checks of the application (such as sending real user workloads to the new instance), consider implementing a custom deployment or using a blue-green deployment strategy. 7.3.1.2. Creating a rolling deployment Rolling deployments are the default type in OpenShift Container Platform. You can create a rolling deployment using the CLI. Procedure Create an application based on the example deployment images found in Quay.io : USD oc new-app quay.io/openshifttest/deployment-example:latest If you have the router installed, make the application available via a route or use the service IP directly. USD oc expose svc/deployment-example Browse to the application at deployment-example.<project>.<router_domain> to verify you see the v1 image. Scale the DeploymentConfig object up to three replicas: USD oc scale dc/deployment-example --replicas=3 Trigger a new deployment automatically by tagging a new version of the example as the latest tag: USD oc tag deployment-example:v2 deployment-example:latest In your browser, refresh the page until you see the v2 image. When using the CLI, the following command shows how many pods are on version 1 and how many are on version 2. In the web console, the pods are progressively added to v2 and removed from v1: USD oc describe dc deployment-example During the deployment process, the new replication controller is incrementally scaled up.
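If you prefer to wait on the rollout from the CLI instead of refreshing the browser, a command such as the following should work; this is a minimal sketch that assumes the deployment-example DeploymentConfig created in this procedure:
oc rollout status dc/deployment-example    # returns once the rollout completes or fails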
After the new pods are marked as ready (by passing their readiness check), the deployment process continues. If the pods do not become ready, the process aborts, and the deployment rolls back to its previous version. 7.3.1.3. Starting a rolling deployment using the Developer perspective Prerequisites Ensure that you are in the Developer perspective of the web console. Ensure that you have created an application using the Add view and see it deployed in the Topology view. Procedure To start a rolling deployment to upgrade an application: In the Topology view of the Developer perspective, click on the application node to see the Overview tab in the side panel. Note that the Update Strategy is set to the default Rolling strategy. In the Actions drop-down menu, select Start Rollout to start a rolling update. The rolling deployment spins up the new version of the application and then terminates the old one. Figure 7.1. Rolling update Additional resources Creating and deploying applications on OpenShift Container Platform using the Developer perspective Viewing the applications in your project, verifying their deployment status, and interacting with them in the Topology view 7.3.2. Recreate strategy The recreate strategy has basic rollout behavior and supports lifecycle hooks for injecting code into the deployment process. Example recreate strategy definition strategy: type: Recreate recreateParams: 1 pre: {} 2 mid: {} post: {} 1 recreateParams are optional. 2 pre , mid , and post are lifecycle hooks. The recreate strategy: Executes any pre lifecycle hook. Scales down the previous deployment to zero. Executes any mid lifecycle hook. Scales up the new deployment. Executes any post lifecycle hook. Important During scale up, if the replica count of the deployment is greater than one, the first replica of the deployment will be validated for readiness before fully scaling up the deployment. If the validation of the first replica fails, the deployment will be considered a failure. When to use a recreate deployment: When you must run migrations or other data transformations before your new code starts. When you do not support having new and old versions of your application code running at the same time. When you want to use a RWO volume, which is not supported being shared between multiple replicas. A recreate deployment incurs downtime because, for a brief period, no instances of your application are running. However, your old code and new code do not run at the same time. 7.3.3. Starting a recreate deployment using the Developer perspective You can switch the deployment strategy from the default rolling update to a recreate update using the Developer perspective in the web console. Prerequisites Ensure that you are in the Developer perspective of the web console. Ensure that you have created an application using the Add view and see it deployed in the Topology view. Procedure To switch to a recreate update strategy and to upgrade an application: In the Actions drop-down menu, select Edit Deployment Config to see the deployment configuration details of the application. In the YAML editor, change the spec.strategy.type to Recreate and click Save . In the Topology view, select the node to see the Overview tab in the side panel. The Update Strategy is now set to Recreate . Use the Actions drop-down menu to select Start Rollout to start an update using the recreate strategy. The recreate strategy first terminates pods for the older version of the application and then spins up pods for the new version. Figure 7.2.
Recreate update Additional resources Creating and deploying applications on OpenShift Container Platform using the Developer perspective Viewing the applications in your project, verifying their deployment status, and interacting with them in the Topology view 7.3.4. Custom strategy The custom strategy allows you to provide your own deployment behavior. Example custom strategy definition strategy: type: Custom customParams: image: organization/strategy command: [ "command", "arg1" ] environment: - name: ENV_1 value: VALUE_1 In the above example, the organization/strategy container image provides the deployment behavior. The optional command array overrides any CMD directive specified in the image's Dockerfile . The optional environment variables provided are added to the execution environment of the strategy process. Additionally, OpenShift Container Platform provides the following environment variables to the deployment process: Environment variable Description OPENSHIFT_DEPLOYMENT_NAME The name of the new deployment, a replication controller. OPENSHIFT_DEPLOYMENT_NAMESPACE The name space of the new deployment. The replica count of the new deployment will initially be zero. The responsibility of the strategy is to make the new deployment active using the logic that best serves the needs of the user. Alternatively, use the customParams object to inject the custom deployment logic into the existing deployment strategies. Provide a custom shell script logic and call the openshift-deploy binary. Users do not have to supply their custom deployer container image; in this case, the default OpenShift Container Platform deployer image is used instead: strategy: type: Rolling customParams: command: - /bin/sh - -c - | set -e openshift-deploy --until=50% echo Halfway there openshift-deploy echo Complete This results in following deployment: Started deployment #2 --> Scaling up custom-deployment-2 from 0 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-2 up to 1 --> Reached 50% (currently 50%) Halfway there --> Scaling up custom-deployment-2 from 1 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-1 down to 1 Scaling custom-deployment-2 up to 2 Scaling custom-deployment-1 down to 0 --> Success Complete If the custom deployment strategy process requires access to the OpenShift Container Platform API or the Kubernetes API the container that executes the strategy can use the service account token available inside the container for authentication. 7.3.5. Lifecycle hooks The rolling and recreate strategies support lifecycle hooks , or deployment hooks, which allow behavior to be injected into the deployment process at predefined points within the strategy: Example pre lifecycle hook pre: failurePolicy: Abort execNewPod: {} 1 1 execNewPod is a pod-based lifecycle hook. Every hook has a failure policy , which defines the action the strategy should take when a hook failure is encountered: Abort The deployment process will be considered a failure if the hook fails. Retry The hook execution should be retried until it succeeds. Ignore Any hook failure should be ignored and the deployment should proceed. Hooks have a type-specific field that describes how to execute the hook. Currently, pod-based hooks are the only supported hook type, specified by the execNewPod field. 
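For example, a recreate deployment that must run database migrations before the new pods start could pair a mid hook with the Retry policy. The following is a minimal sketch rather than an example from the product documentation; the migration command and the helloworld container name are illustrative assumptions, and the execNewPod field it uses is described in the next example:
strategy:
  type: Recreate
  recreateParams:
    mid:
      failurePolicy: Retry               # keep retrying the migration until it succeeds
      execNewPod:
        containerName: helloworld        # reuse the image and environment of this container
        command: [ "/usr/bin/migrate-db", "--to-latest" ]
Because the mid hook runs after the old pods are scaled down and before the new ones are scaled up, the migration never runs concurrently with the old application code.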
Pod-based lifecycle hook Pod-based lifecycle hooks execute hook code in a new pod derived from the template in a DeploymentConfig object. The following simplified example deployment uses the rolling strategy. Triggers and some other minor details are omitted for brevity: kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: template: metadata: labels: name: frontend spec: containers: - name: helloworld image: openshift/origin-ruby-sample replicas: 5 selector: name: frontend strategy: type: Rolling rollingParams: pre: failurePolicy: Abort execNewPod: containerName: helloworld 1 command: [ "/usr/bin/command", "arg1", "arg2" ] 2 env: 3 - name: CUSTOM_VAR1 value: custom_value1 volumes: - data 4 1 The helloworld name refers to spec.template.spec.containers[0].name . 2 This command overrides any ENTRYPOINT defined by the openshift/origin-ruby-sample image. 3 env is an optional set of environment variables for the hook container. 4 volumes is an optional set of volume references for the hook container. In this example, the pre hook will be executed in a new pod using the openshift/origin-ruby-sample image from the helloworld container. The hook pod has the following properties: The hook command is /usr/bin/command arg1 arg2 . The hook container has the CUSTOM_VAR1=custom_value1 environment variable. The hook failure policy is Abort , meaning the deployment process fails if the hook fails. The hook pod inherits the data volume from the DeploymentConfig object pod. 7.3.5.1. Setting lifecycle hooks You can set lifecycle hooks, or deployment hooks, for a deployment using the CLI. Procedure Use the oc set deployment-hook command to set the type of hook you want: --pre , --mid , or --post . For example, to set a pre-deployment hook: USD oc set deployment-hook dc/frontend \ --pre -c helloworld -e CUSTOM_VAR1=custom_value1 \ --volumes data --failure-policy=abort -- /usr/bin/command arg1 arg2 7.4. Using route-based deployment strategies Deployment strategies provide a way for the application to evolve. Some strategies use Deployment objects to make changes that are seen by users of all routes that resolve to the application. Other advanced strategies, such as the ones described in this section, use router features in conjunction with Deployment objects to impact specific routes. The most common route-based strategy is to use a blue-green deployment . The new version (the green version) is brought up for testing and evaluation, while the users still use the stable version (the blue version). When ready, the users are switched to the green version. If a problem arises, you can switch back to the blue version. A common alternative strategy is to use A/B versions that are both active at the same time and some users use one version, and some users use the other version. This can be used for experimenting with user interface changes and other features to get user feedback. It can also be used to verify proper operation in a production context where problems impact a limited number of users. A canary deployment tests the new version but when a problem is detected it quickly falls back to the previous version. This can be done with both of the above strategies. The route-based deployment strategies do not scale the number of pods in the services. To maintain desired performance characteristics the deployment configurations might have to be scaled. 7.4.1.
Proxy shards and traffic splitting In production environments, you can precisely control the distribution of traffic that lands on a particular shard. When dealing with large numbers of instances, you can use the relative scale of individual shards to implement percentage based traffic. That combines well with a proxy shard , which forwards or splits the traffic it receives to a separate service or application running elsewhere. In the simplest configuration, the proxy forwards requests unchanged. In more complex setups, you can duplicate the incoming requests and send to both a separate cluster as well as to a local instance of the application, and compare the result. Other patterns include keeping the caches of a DR installation warm, or sampling incoming traffic for analysis purposes. Any TCP (or UDP) proxy could be run under the desired shard. Use the oc scale command to alter the relative number of instances serving requests under the proxy shard. For more complex traffic management, consider customizing the OpenShift Container Platform router with proportional balancing capabilities. 7.4.2. N-1 compatibility Applications that have new code and old code running at the same time must be careful to ensure that data written by the new code can be read and handled (or gracefully ignored) by the old version of the code. This is sometimes called schema evolution and is a complex problem. This can take many forms: data stored on disk, in a database, in a temporary cache, or that is part of a user's browser session. While most web applications can support rolling deployments, it is important to test and design your application to handle it. For some applications, the period of time that old code and new code is running side by side is short, so bugs or some failed user transactions are acceptable. For others, the failure pattern may result in the entire application becoming non-functional. One way to validate N-1 compatibility is to use an A/B deployment: run the old code and new code at the same time in a controlled way in a test environment, and verify that traffic that flows to the new deployment does not cause failures in the old deployment. 7.4.3. Graceful termination OpenShift Container Platform and Kubernetes give application instances time to shut down before removing them from load balancing rotations. However, applications must ensure they cleanly terminate user connections as well before they exit. On shutdown, OpenShift Container Platform sends a TERM signal to the processes in the container. Application code, on receiving SIGTERM , should stop accepting new connections. This ensures that load balancers route traffic to other active instances. The application code then waits until all open connections are closed, or gracefully terminates individual connections at the next opportunity, before exiting. After the graceful termination period expires, a process that has not exited is sent the KILL signal, which immediately ends the process. The terminationGracePeriodSeconds attribute of a pod or pod template controls the graceful termination period (default 30 seconds) and can be customized per application as necessary. 7.4.4. Blue-green deployments Blue-green deployments involve running two versions of an application at the same time and moving traffic from the in-production version (the blue version) to the newer version (the green version). You can use a rolling strategy or switch services in a route.
Because many applications depend on persistent data, you must have an application that supports N-1 compatibility , which means it shares data and implements live migration between the database, store, or disk by creating two copies of the data layer. Consider the data used in testing the new version. If it is the production data, a bug in the new version can break the production version. 7.4.4.1. Setting up a blue-green deployment Blue-green deployments use two Deployment objects. Both are running, and the one in production depends on the service the route specifies, with each Deployment object exposed to a different service. Note Routes are intended for web (HTTP and HTTPS) traffic, so this technique is best suited for web applications. You can create a new route to the new version and test it. When ready, change the service in the production route to point to the new service and the new (green) version is live. If necessary, you can roll back to the older (blue) version by switching the service back to the previous version. Procedure Create two independent application components. Create a copy of the example application running the v1 image under the example-blue service: USD oc new-app openshift/deployment-example:v1 --name=example-blue Create a second copy that uses the v2 image under the example-green service: USD oc new-app openshift/deployment-example:v2 --name=example-green Create a route that points to the old service: USD oc expose svc/example-blue --name=bluegreen-example Browse to the application at bluegreen-example-<project>.<router_domain> to verify you see the v1 image. Edit the route and change the service name to example-green : USD oc patch route/bluegreen-example -p '{"spec":{"to":{"name":"example-green"}}}' To verify that the route has changed, refresh the browser until you see the v2 image. 7.4.5. A/B deployments The A/B deployment strategy lets you try a new version of the application in a limited way in the production environment. You can specify that the production version gets most of the user requests while a limited fraction of requests go to the new version. Because you control the portion of requests to each version, as testing progresses you can increase the fraction of requests to the new version and ultimately stop using the previous version. As you adjust the request load on each version, the number of pods in each service might have to be scaled as well to provide the expected performance. In addition to upgrading software, you can use this feature to experiment with versions of the user interface. Since some users get the old version and some the new, you can evaluate the user's reaction to the different versions to inform design decisions. For this to be effective, both the old and new versions must be similar enough that both can run at the same time. This is common with bug fix releases and when new features do not interfere with the old. The versions require N-1 compatibility to properly work together. OpenShift Container Platform supports N-1 compatibility through the web console as well as the CLI. 7.4.5.1. Load balancing for A/B testing The user sets up a route with multiple services. Each service handles a version of the application. Each service is assigned a weight and the portion of requests to each service is the service_weight divided by the sum_of_weights . The weight for each service is distributed to the service's endpoints so that the sum of the endpoint weights is the service weight . The route can have up to four services.
The weight for the service can be between 0 and 256 . When the weight is 0 , the service does not participate in load-balancing but continues to serve existing persistent connections. When the service weight is not 0 , each endpoint has a minimum weight of 1 . Because of this, a service with a lot of endpoints can end up with higher weight than intended. In this case, reduce the number of pods to get the expected load balance weight . Procedure To set up the A/B environment: Create the two applications and give them different names. Each creates a Deployment object. The applications are versions of the same program; one is usually the current production version and the other the proposed new version. Create the first application. The following example creates an application called ab-example-a : USD oc new-app openshift/deployment-example --name=ab-example-a Create the second application: USD oc new-app openshift/deployment-example:v2 --name=ab-example-b Both applications are deployed and services are created. Make the application available externally via a route. At this point, you can expose either. It can be convenient to expose the current production version first and later modify the route to add the new version. USD oc expose svc/ab-example-a Browse to the application at ab-example-a.<project>.<router_domain> to verify that you see the expected version. When you deploy the route, the router balances the traffic according to the weights specified for the services. At this point, there is a single service with default weight=1 so all requests go to it. Adding the other service as an alternateBackends and adjusting the weights brings the A/B setup to life. This can be done by the oc set route-backends command or by editing the route. Setting a service's weight to 0 with the oc set route-backends command means that the service does not participate in load-balancing, but continues to serve existing persistent connections. Note Changes to the route just change the portion of traffic to the various services. You might have to scale the deployment to adjust the number of pods to handle the anticipated loads. To edit the route, run: USD oc edit route <route_name> Example output ... metadata: name: route-alternate-service annotations: haproxy.router.openshift.io/balance: roundrobin spec: host: ab-example.my-project.my-domain to: kind: Service name: ab-example-a weight: 10 alternateBackends: - kind: Service name: ab-example-b weight: 15 ... 7.4.5.1.1. Managing weights of an existing route using the web console Procedure Navigate to the Networking Routes page. Click the Actions menu next to the route you want to edit and select Edit Route . Edit the YAML file. Update the weight to be an integer between 0 and 256 that specifies the relative weight of the target against other target reference objects. The value 0 suppresses requests to this back end. The default is 100 . Run oc explain routes.spec.alternateBackends for more information about the options. Click Save . 7.4.5.1.2. Managing weights of a new route using the web console Navigate to the Networking Routes page. Click Create Route . Enter the route Name . Select the Service . Click Add Alternate Service . Enter a value for Weight and Alternate Service Weight . Enter a number between 0 and 255 that depicts relative weight compared with other targets. The default is 100 . Select the Target Port . Click Create . 7.4.5.1.3.
Managing weights using the CLI Procedure To manage the services and corresponding weights load balanced by the route, use the oc set route-backends command: USD oc set route-backends ROUTENAME \ [--zero|--equal] [--adjust] SERVICE=WEIGHT[%] [...] [options] For example, the following sets ab-example-a as the primary service with weight=198 and ab-example-b as the first alternate service with a weight=2 : USD oc set route-backends ab-example ab-example-a=198 ab-example-b=2 This means 99% of traffic is sent to service ab-example-a and 1% to service ab-example-b . This command does not scale the deployment. You might be required to do so to have enough pods to handle the request load. Run the command with no flags to verify the current configuration: USD oc set route-backends ab-example Example output NAME KIND TO WEIGHT routes/ab-example Service ab-example-a 198 (99%) routes/ab-example Service ab-example-b 2 (1%) To alter the weight of an individual service relative to itself or to the primary service, use the --adjust flag. Specifying a percentage adjusts the service relative to either the primary or the first alternate (if you specify the primary). If there are other backends, their weights are kept proportional to the changed one. The following example alters the weight of ab-example-a and ab-example-b services: USD oc set route-backends ab-example --adjust ab-example-a=200 ab-example-b=10 Alternatively, alter the weight of a service by specifying a percentage: USD oc set route-backends ab-example --adjust ab-example-b=5% By specifying + before the percentage declaration, you can adjust a weighting relative to the current setting. For example: USD oc set route-backends ab-example --adjust ab-example-b=+15% The --equal flag sets the weight of all services to 100 : USD oc set route-backends ab-example --equal The --zero flag sets the weight of all services to 0 . All requests then return with a 503 error. Note Not all routers may support multiple or weighted backends. 7.4.5.1.4. One service, multiple Deployment objects Procedure Create a new application, adding a label ab-example=true that will be common to all shards: USD oc new-app openshift/deployment-example --name=ab-example-a --as-deployment-config=true --labels=ab-example=true --env=SUBTITLE\=shardA USD oc delete svc/ab-example-a The application is deployed and a service is created. This is the first shard. Make the application available via a route, or use the service IP directly: USD oc expose deployment ab-example-a --name=ab-example --selector=ab-example\=true USD oc expose service ab-example Browse to the application at ab-example-<project_name>.<router_domain> to verify you see the v1 image. Create a second shard based on the same source image and label as the first shard, but with a different tagged version and unique environment variables: USD oc new-app openshift/deployment-example:v2 \ --name=ab-example-b --labels=ab-example=true \ SUBTITLE="shard B" COLOR="red" --as-deployment-config=true USD oc delete svc/ab-example-b At this point, both sets of pods are being served under the route. However, because both browsers (by leaving a connection open) and the router (by default, through a cookie) attempt to preserve your connection to a back-end server, you might not see both shards being returned to you. To force your browser to one or the other shard: Use the oc scale command to reduce replicas of ab-example-a to 0 . USD oc scale dc/ab-example-a --replicas=0 Refresh your browser to show v2 and shard B (in red).
Scale ab-example-a to 1 replica and ab-example-b to 0 : USD oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0 Refresh your browser to show v1 and shard A (in blue). If you trigger a deployment on either shard, only the pods in that shard are affected. You can trigger a deployment by changing the SUBTITLE environment variable in either Deployment object: USD oc edit dc/ab-example-a or USD oc edit dc/ab-example-b | [
"apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always",
"apiVersion: apps/v1 kind: ReplicaSet metadata: name: frontend-1 labels: tier: frontend spec: replicas: 3 selector: 1 matchLabels: 2 tier: frontend matchExpressions: 3 - {key: tier, operator: In, values: [frontend]} template: metadata: labels: tier: frontend spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always",
"apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: frontend spec: replicas: 5 selector: name: frontend template: { ... } triggers: - type: ConfigChange 1 - imageChangeParams: automatic: true containerNames: - helloworld from: kind: ImageStreamTag name: hello-openshift:latest type: ImageChange 2 strategy: type: Rolling 3",
"apiVersion: apps/v1 kind: Deployment metadata: name: hello-openshift spec: replicas: 1 selector: matchLabels: app: hello-openshift template: metadata: labels: app: hello-openshift spec: containers: - name: hello-openshift image: openshift/hello-openshift:latest ports: - containerPort: 80",
"oc rollout pause deployments/<name>",
"oc rollout latest dc/<name>",
"oc rollout history dc/<name>",
"oc rollout history dc/<name> --revision=1",
"oc describe dc <name>",
"oc rollout retry dc/<name>",
"oc rollout undo dc/<name>",
"oc set triggers dc/<name> --auto",
"spec: containers: - name: <container_name> image: 'image' command: - '<command>' args: - '<argument_1>' - '<argument_2>' - '<argument_3>'",
"spec: containers: - name: example-spring-boot image: 'image' command: - java args: - '-jar' - /opt/app-root/springboots2idemo.jar",
"oc logs -f dc/<name>",
"oc logs --version=1 dc/<name>",
"triggers: - type: \"ConfigChange\"",
"triggers: - type: \"ImageChange\" imageChangeParams: automatic: true 1 from: kind: \"ImageStreamTag\" name: \"origin-ruby-sample:latest\" namespace: \"myproject\" containerNames: - \"helloworld\"",
"oc set triggers dc/<dc_name> --from-image=<project>/<image>:<tag> -c <container_name>",
"type: \"Recreate\" resources: limits: cpu: \"100m\" 1 memory: \"256Mi\" 2 ephemeral-storage: \"1Gi\" 3",
"type: \"Recreate\" resources: requests: 1 cpu: \"100m\" memory: \"256Mi\" ephemeral-storage: \"1Gi\"",
"oc scale dc frontend --replicas=3",
"apiVersion: v1 kind: Pod spec: nodeSelector: disktype: ssd",
"oc edit dc/<deployment_config>",
"spec: securityContext: {} serviceAccount: <service_account> serviceAccountName: <service_account>",
"strategy: type: Rolling rollingParams: updatePeriodSeconds: 1 1 intervalSeconds: 1 2 timeoutSeconds: 120 3 maxSurge: \"20%\" 4 maxUnavailable: \"10%\" 5 pre: {} 6 post: {}",
"oc new-app quay.io/openshifttest/deployment-example:latest",
"oc expose svc/deployment-example",
"oc scale dc/deployment-example --replicas=3",
"oc tag deployment-example:v2 deployment-example:latest",
"oc describe dc deployment-example",
"strategy: type: Recreate recreateParams: 1 pre: {} 2 mid: {} post: {}",
"strategy: type: Custom customParams: image: organization/strategy command: [ \"command\", \"arg1\" ] environment: - name: ENV_1 value: VALUE_1",
"strategy: type: Rolling customParams: command: - /bin/sh - -c - | set -e openshift-deploy --until=50% echo Halfway there openshift-deploy echo Complete",
"Started deployment #2 --> Scaling up custom-deployment-2 from 0 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-2 up to 1 --> Reached 50% (currently 50%) Halfway there --> Scaling up custom-deployment-2 from 1 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-1 down to 1 Scaling custom-deployment-2 up to 2 Scaling custom-deployment-1 down to 0 --> Success Complete",
"pre: failurePolicy: Abort execNewPod: {} 1",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: template: metadata: labels: name: frontend spec: containers: - name: helloworld image: openshift/origin-ruby-sample replicas: 5 selector: name: frontend strategy: type: Rolling rollingParams: pre: failurePolicy: Abort execNewPod: containerName: helloworld 1 command: [ \"/usr/bin/command\", \"arg1\", \"arg2\" ] 2 env: 3 - name: CUSTOM_VAR1 value: custom_value1 volumes: - data 4",
"oc set deployment-hook dc/frontend --pre -c helloworld -e CUSTOM_VAR1=custom_value1 --volumes data --failure-policy=abort -- /usr/bin/command arg1 arg2",
"oc new-app openshift/deployment-example:v1 --name=example-blue",
"oc new-app openshift/deployment-example:v2 --name=example-green",
"oc expose svc/example-blue --name=bluegreen-example",
"oc patch route/bluegreen-example -p '{\"spec\":{\"to\":{\"name\":\"example-green\"}}}'",
"oc new-app openshift/deployment-example --name=ab-example-a",
"oc new-app openshift/deployment-example:v2 --name=ab-example-b",
"oc expose svc/ab-example-a",
"oc edit route <route_name>",
"metadata: name: route-alternate-service annotations: haproxy.router.openshift.io/balance: roundrobin spec: host: ab-example.my-project.my-domain to: kind: Service name: ab-example-a weight: 10 alternateBackends: - kind: Service name: ab-example-b weight: 15",
"oc set route-backends ROUTENAME [--zero|--equal] [--adjust] SERVICE=WEIGHT[%] [...] [options]",
"oc set route-backends ab-example ab-example-a=198 ab-example-b=2",
"oc set route-backends ab-example",
"NAME KIND TO WEIGHT routes/ab-example Service ab-example-a 198 (99%) routes/ab-example Service ab-example-b 2 (1%)",
"oc set route-backends ab-example --adjust ab-example-a=200 ab-example-b=10",
"oc set route-backends ab-example --adjust ab-example-b=5%",
"oc set route-backends ab-example --adjust ab-example-b=+15%",
"oc set route-backends ab-example --equal",
"oc new-app openshift/deployment-example --name=ab-example-a --as-deployment-config=true --labels=ab-example=true --env=SUBTITLE\\=shardA oc delete svc/ab-example-a",
"oc expose deployment ab-example-a --name=ab-example --selector=ab-example\\=true oc expose service ab-example",
"oc new-app openshift/deployment-example:v2 --name=ab-example-b --labels=ab-example=true SUBTITLE=\"shard B\" COLOR=\"red\" --as-deployment-config=true oc delete svc/ab-example-b",
"oc scale dc/ab-example-a --replicas=0",
"oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0",
"oc edit dc/ab-example-a",
"oc edit dc/ab-example-b"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/building_applications/deployments |
19.3. Mail Transport Agents | 19.3. Mail Transport Agents Red Hat Enterprise Linux offers two primary MTAs: Postfix and Sendmail. Postfix is configured as the default MTA, although it is easy to switch the default MTA to Sendmail. To switch the default MTA to Sendmail, you can either uninstall Postfix or use the following command to switch to Sendmail: You can also use a command in the following format to enable or disable the desired service: chkconfig service_name on | off 19.3.1. Postfix Originally developed at IBM by security expert and programmer Wietse Venema, Postfix is a Sendmail-compatible MTA that is designed to be secure, fast, and easy to configure. To improve security, Postfix uses a modular design, where small processes with limited privileges are launched by a master daemon. The smaller, less privileged processes perform very specific tasks related to the various stages of mail delivery and run in a changed root environment to limit the effects of attacks. Configuring Postfix to accept network connections from hosts other than the local computer takes only a few minor changes in its configuration file. Yet for those with more complex needs, Postfix provides a variety of configuration options, as well as third party add-ons that make it a very versatile and full-featured MTA. The configuration files for Postfix are human readable and support upward of 250 directives. Unlike Sendmail, no macro processing is required for changes to take effect and the majority of the most commonly used options are described in the heavily commented files. 19.3.1.1. The Default Postfix Installation The Postfix executable is /usr/sbin/postfix . This daemon launches all related processes needed to handle mail delivery. Postfix stores its configuration files in the /etc/postfix/ directory. The following is a list of the more commonly used files: access - Used for access control, this file specifies which hosts are allowed to connect to Postfix. main.cf - The global Postfix configuration file. The majority of configuration options are specified in this file. master.cf - Specifies how Postfix interacts with various processes to accomplish mail delivery. transport - Maps email addresses to relay hosts. The aliases file can be found in the /etc/ directory. This file is shared between Postfix and Sendmail. It is a configurable list required by the mail protocol that describes user ID aliases. Important The default /etc/postfix/main.cf file does not allow Postfix to accept network connections from a host other than the local computer. For instructions on configuring Postfix as a server for other clients, see Section 19.3.1.2, "Basic Postfix Configuration" . Restart the postfix service after changing any options in the configuration files under the /etc/postfix directory in order for those changes to take effect: 19.3.1.2. Basic Postfix Configuration By default, Postfix does not accept network connections from any host other than the local host. Perform the following steps as root to enable mail delivery for other hosts on the network: Edit the /etc/postfix/main.cf file with a text editor, such as vi . Uncomment the mydomain line by removing the hash sign ( # ), and replace domain.tld with the domain the mail server is servicing, such as example.com . Uncomment the myorigin = USDmydomain line. Uncomment the myhostname line, and replace host.domain.tld with the host name for the machine. Uncomment the mydestination = USDmyhostname, localhost.USDmydomain line. 
Uncomment the mynetworks line, and replace 168.100.189.0/28 with a valid network setting for hosts that can connect to the server. Uncomment the inet_interfaces = all line. Comment the inet_interfaces = localhost line. Restart the postfix service. Once these steps are complete, the host accepts outside emails for delivery. Postfix has a large assortment of configuration options. One of the best ways to learn how to configure Postfix is to read the comments within the /etc/postfix/main.cf configuration file. Additional resources including information about Postfix configuration, SpamAssassin integration, or detailed descriptions of the /etc/postfix/main.cf parameters are available online at http://www.postfix.org/ . 19.3.1.2.1. Configuring Postfix to Use Transport Layer Security Configuring postfix to use transport layer security ( TLS ) is described in the Red Hat Knowledgebase solution How to configure postfix with TLS? Important Due to the vulnerability described in Resolution for POODLE SSL 3.0 vulnerability (CVE-2014-3566) in Postfix and Dovecot , Red Hat recommends disabling SSL , if it is enabled, and using only TLSv1.1 or TLSv1.2 . Backwards compatibility can be achieved using TLSv1.0 . Many products Red Hat supports have the ability to use SSLv2 or SSLv3 protocols. However, the use of SSLv2 or SSLv3 is now strongly recommended against. 19.3.1.3. Using Postfix with LDAP Postfix can use an LDAP directory as a source for various lookup tables (e.g.: aliases , virtual , canonical , etc.). This allows LDAP to store hierarchical user information and Postfix to only be given the result of LDAP queries when needed. By not storing this information locally, administrators can easily maintain it. 19.3.1.3.1. The /etc/aliases lookup example The following is a basic example for using LDAP to look up the /etc/aliases file. Make sure your /etc/postfix/main.cf file contains the following: Create a /etc/postfix/ldap-aliases.cf file if you do not have one already and make sure it contains the following: where ldap.example.com , example , and com are parameters that need to be replaced with specification of an existing available LDAP server. Note The /etc/postfix/ldap-aliases.cf file can specify various parameters, including parameters that enable LDAP SSL and STARTTLS . For more information, see the ldap_table(5) man page. For more information on LDAP , see Section 20.1, "OpenLDAP" . | [
"~]# alternatives --config mta",
"~]# service postfix restart",
"alias_maps = hash:/etc/aliases, ldap:/etc/postfix/ldap-aliases.cf",
"server_host = ldap.example.com search_base = dc= example , dc= com"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-email-mta |
3.5. Growing a GFS2 File System | 3.5. Growing a GFS2 File System The gfs2_grow command is used to expand a GFS2 file system after the device where the file system resides has been expanded. Running the gfs2_grow command on an existing GFS2 file system fills all spare space between the current end of the file system and the end of the device with a newly initialized GFS2 file system extension. When the fill operation is completed, the resource index for the file system is updated. All nodes in the cluster can then use the extra storage space that has been added. The gfs2_grow command must be run on a mounted file system, but only needs to be run on one node in a cluster. All the other nodes sense that the expansion has occurred and automatically start using the new space. Note Once you have created a GFS2 file system with the mkfs.gfs2 command, you cannot decrease the size of the file system. Usage MountPoint Specifies the GFS2 file system to which the actions apply. Comments Before running the gfs2_grow command: Back up important data on the file system. Determine the volume that is used by the file system to be expanded by running the df MountPoint command. Expand the underlying cluster volume with LVM. For information on administering LVM volumes, see Logical Volume Manager Administration . After running the gfs2_grow command, run the df command to check that the new space is now available in the file system. Examples In this example, the file system on the /mygfs2fs directory is expanded. Complete Usage MountPoint Specifies the directory where the GFS2 file system is mounted. Device Specifies the device node of the file system. Table 3.3, "GFS2-specific Options Available While Expanding A File System" describes the GFS2-specific options that can be used while expanding a GFS2 file system. Table 3.3. GFS2-specific Options Available While Expanding A File System Option Description -h Help. Displays a short usage message. -q Quiet. Turns down the verbosity level. -r Megabytes Specifies the size of the new resource group. The default size is 256 megabytes. -T Test. Do all calculations, but do not write any data to the disk and do not expand the file system. -V Displays command version information. | [
"gfs2_grow MountPoint",
"gfs2_grow /mygfs2fs FS: Mount Point: /mygfs2fs FS: Device: /dev/mapper/gfs2testvg-gfs2testlv FS: Size: 524288 (0x80000) FS: RG size: 65533 (0xfffd) DEV: Size: 655360 (0xa0000) The file system grew by 512MB. gfs2_grow complete.",
"gfs2_grow [ Options ] { MountPoint | Device } [ MountPoint | Device ]"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/global_file_system_2/s1-manage-growfs |
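The following shell sketch ties the expansion steps above together; the logical volume path is taken from the example output in this record, while the +50G extension size is a hypothetical value chosen only for illustration.
# Identify the device backing the mounted GFS2 file system
df -h /mygfs2fs
# Grow the underlying logical volume first (size shown is an example)
lvextend -L +50G /dev/mapper/gfs2testvg-gfs2testlv
# Fill the new space with GFS2 structures; run on one cluster node only
gfs2_grow /mygfs2fs
# Verify that the additional space is now available
df -h /mygfs2fs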
Chapter 1. Getting started with AMQ Interconnect on OpenShift Container Platform | Chapter 1. Getting started with AMQ Interconnect on OpenShift Container Platform AMQ Interconnect is a lightweight AMQP 1.0 message router for building large, highly resilient messaging networks for hybrid cloud and IoT/edge deployments. AMQ Interconnect automatically learns the addresses of messaging endpoints (such as clients, servers, and message brokers) and flexibly routes messages between them. This document describes how to deploy AMQ Interconnect on OpenShift Container Platform by using the AMQ Interconnect Operator and the Interconnect Custom Resource Definition (CRD) that it provides. The CRD defines an AMQ Interconnect deployment, and the Operator creates and manages the deployment in OpenShift Container Platform. 1.1. What Operators are Operators are a method of packaging, deploying, and managing a Kubernetes application. They take human operational knowledge and encode it into software that is more easily shared with consumers to automate common or complex tasks. In OpenShift Container Platform 4.0, the Operator Lifecycle Manager (OLM) helps users install, update, and generally manage the life cycle of all Operators and their associated services running across their clusters. It is part of the Operator Framework, an open source toolkit designed to manage Kubernetes native applications (Operators) in an effective, automated, and scalable way. The OLM runs by default in OpenShift Container Platform 4.0, which aids cluster administrators in installing, upgrading, and granting access to Operators running on their cluster. The OpenShift Container Platform web console provides management screens for cluster administrators to install Operators, as well as grant specific projects access to use the catalog of Operators available on the cluster. OperatorHub is the graphical interface that OpenShift Container Platform cluster administrators use to discover, install, and upgrade Operators. With one click, these Operators can be pulled from OperatorHub, installed on the cluster, and managed by the OLM, ready for engineering teams to self-service manage the software in development, test, and production environments. Additional resources For more information about Operators, see the OpenShift documentation . 1.2. Provided Custom Resources The AMQ Interconnect Operator provides the Interconnect Custom Resource Definition (CRD), which allows you to interact with an AMQ Interconnect deployment running on OpenShift Container Platform just like other OpenShift Container Platform API objects. The Interconnect CRD represents a deployment of AMQ Interconnect routers. The CRD provides elements for defining many different aspects of a router deployment in OpenShift Container Platform such as: Number of AMQ Interconnect routers Deployment topology Connectivity Address semantics | null | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/deploying_amq_interconnect_on_openshift/getting-started-router-openshift-router-ocp |
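As a rough illustration of the Interconnect custom resource described in this record, the sketch below creates a small interior router deployment; the API group interconnectedcloud.github.io/v1alpha1, the name example-router-mesh, and the deploymentPlan fields are assumptions based on common Interconnect CR examples, not values taken from this chapter.
cat << EOF | oc apply -f -
apiVersion: interconnectedcloud.github.io/v1alpha1   # assumed group/version for the Interconnect CRD
kind: Interconnect
metadata:
  name: example-router-mesh                          # hypothetical name
spec:
  deploymentPlan:
    size: 2          # number of AMQ Interconnect routers (assumed field)
    role: interior   # deployment topology role (assumed field)
EOF
The Operator would then create and manage the corresponding router pods, as the chapter describes.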
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/snip-conscious-language_developing-process-services |
Preface | Preface Providing feedback on Red Hat documentation You can give feedback or report an error in the documentation by creating a Jira issue. You must have a Red Hat Jira account. Log in to Jira . Click Create Issue to launch the form in a browser. Complete the Summary , Description , and Reporter fields. Click Create to submit the form. The form creates an issue in the Red Hat Hybrid Cloud Infrastructure (HCIDOCS) Jira project. | null | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_openshift_container_platform_with_the_assisted_installer/pr01 |
Operators | Operators OpenShift Container Platform 4.10 Working with Operators in OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"etcd ├── manifests │ ├── etcdcluster.crd.yaml │ └── etcdoperator.clusterserviceversion.yaml │ └── secret.yaml │ └── configmap.yaml └── metadata └── annotations.yaml └── dependencies.yaml",
"annotations: operators.operatorframework.io.bundle.mediatype.v1: \"registry+v1\" 1 operators.operatorframework.io.bundle.manifests.v1: \"manifests/\" 2 operators.operatorframework.io.bundle.metadata.v1: \"metadata/\" 3 operators.operatorframework.io.bundle.package.v1: \"test-operator\" 4 operators.operatorframework.io.bundle.channels.v1: \"beta,stable\" 5 operators.operatorframework.io.bundle.channel.default.v1: \"stable\" 6",
"dependencies: - type: olm.package value: packageName: prometheus version: \">0.27.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2",
"Ignore everything except non-object .json and .yaml files **/* !*.json !*.yaml **/objects/*.json **/objects/*.yaml",
"catalog ├── packageA │ └── index.yaml ├── packageB │ ├── .indexignore │ ├── index.yaml │ └── objects │ └── packageB.v0.1.0.clusterserviceversion.yaml └── packageC └── index.json",
"_Meta: { // schema is required and must be a non-empty string schema: string & !=\"\" // package is optional, but if it's defined, it must be a non-empty string package?: string & !=\"\" // properties is optional, but if it's defined, it must be a list of 0 or more properties properties?: [... #Property] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null }",
"#Package: { schema: \"olm.package\" // Package name name: string & !=\"\" // A description of the package description?: string // The package's default channel defaultChannel: string & !=\"\" // An optional icon icon?: { base64data: string mediatype: string } }",
"#Channel: { schema: \"olm.channel\" package: string & !=\"\" name: string & !=\"\" entries: [...#ChannelEntry] } #ChannelEntry: { // name is required. It is the name of an `olm.bundle` that // is present in the channel. name: string & !=\"\" // replaces is optional. It is the name of bundle that is replaced // by this entry. It does not have to be present in the entry list. replaces?: string & !=\"\" // skips is optional. It is a list of bundle names that are skipped by // this entry. The skipped bundles do not have to be present in the // entry list. skips?: [...string & !=\"\"] // skipRange is optional. It is the semver range of bundle versions // that are skipped by this entry. skipRange?: string & !=\"\" }",
"#Bundle: { schema: \"olm.bundle\" package: string & !=\"\" name: string & !=\"\" image: string & !=\"\" properties: [...#Property] relatedImages?: [...#RelatedImage] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null } #RelatedImage: { // image is the image reference image: string & !=\"\" // name is an optional descriptive name for an image that // helps identify its purpose in the context of the bundle name?: string & !=\"\" }",
"#PropertyPackage: { type: \"olm.package\" value: { packageName: string & !=\"\" version: string & !=\"\" } }",
"#PropertyGVK: { type: \"olm.gvk\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }",
"#PropertyPackageRequired: { type: \"olm.package.required\" value: { packageName: string & !=\"\" versionRange: string & !=\"\" } }",
"#PropertyGVKRequired: { type: \"olm.gvk.required\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }",
"name: community-operators repo: quay.io/community-operators/catalog tag: latest references: - name: etcd-operator image: quay.io/etcd-operator/index@sha256:5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03 - name: prometheus-operator image: quay.io/prometheus-operator/index@sha256:e258d248fda94c63753607f7c4494ee0fcbe92f1a76bfdac795c9d84101eb317",
"name=USD(yq eval '.name' catalog.yaml) mkdir \"USDname\" yq eval '.name + \"/\" + .references[].name' catalog.yaml | xargs mkdir for l in USD(yq e '.name as USDcatalog | .references[] | .image + \"|\" + USDcatalog + \"/\" + .name + \"/index.yaml\"' catalog.yaml); do image=USD(echo USDl | cut -d'|' -f1) file=USD(echo USDl | cut -d'|' -f2) opm render \"USDimage\" > \"USDfile\" done opm alpha generate dockerfile \"USDname\" indexImage=USD(yq eval '.repo + \":\" + .tag' catalog.yaml) docker build -t \"USDindexImage\" -f \"USDname.Dockerfile\" . docker push \"USDindexImage\"",
"\\ufeffapiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog 1 namespace: openshift-marketplace 2 annotations: olm.catalogImageTemplate: 3 \"quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}\" spec: displayName: Example Catalog 4 image: quay.io/example-org/example-catalog:v1 5 priority: -400 6 publisher: Example Org sourceType: grpc 7 grpcPodConfig: nodeSelector: 8 custom_label: <label> priorityClassName: system-cluster-critical 9 tolerations: 10 - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\" updateStrategy: registryPoll: 11 interval: 30m0s status: connectionState: address: example-catalog.openshift-marketplace.svc:50051 lastConnect: 2021-08-26T18:14:31Z lastObservedState: READY 12 latestImageRegistryPoll: 2021-08-26T18:46:25Z 13 registryService: 14 createdAt: 2021-08-26T16:16:37Z port: 50051 protocol: grpc serviceName: example-catalog serviceNamespace: openshift-marketplace",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace",
"registry.redhat.io/redhat/redhat-operator-index:v4.9",
"registry.redhat.io/redhat/redhat-operator-index:v4.10",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog namespace: openshift-marketplace annotations: olm.catalogImageTemplate: \"quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}\" spec: displayName: Example Catalog image: quay.io/example-org/example-catalog:v1.23 priority: -400 publisher: Example Org",
"quay.io/example-org/example-catalog:v1.23",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace",
"apiVersion: operators.coreos.com/v1alpha1 kind: InstallPlan metadata: name: install-abcde namespace: operators spec: approval: Automatic approved: true clusterServiceVersionNames: - my-operator.v1.0.1 generation: 1 status: catalogSources: [] conditions: - lastTransitionTime: '2021-01-01T20:17:27Z' lastUpdateTime: '2021-01-01T20:17:27Z' status: 'True' type: Installed phase: Complete plan: - resolving: my-operator.v1.0.1 resource: group: operators.coreos.com kind: ClusterServiceVersion manifest: >- name: my-operator.v1.0.1 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1alpha1 status: Created - resolving: my-operator.v1.0.1 resource: group: apiextensions.k8s.io kind: CustomResourceDefinition manifest: >- name: webservers.web.servers.org sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1beta1 status: Created - resolving: my-operator.v1.0.1 resource: group: '' kind: ServiceAccount manifest: >- name: my-operator sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: Role manifest: >- name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: RoleBinding manifest: >- name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created",
"packageName: example channels: - name: alpha currentCSV: example.v0.1.2 - name: beta currentCSV: example.v0.1.3 defaultChannel: alpha",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: etcdoperator.v0.9.2 namespace: placeholder annotations: spec: displayName: etcd description: Etcd Operator replaces: etcdoperator.v0.9.0 skips: - etcdoperator.v0.9.1",
"olm.skipRange: <semver_range>",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: elasticsearch-operator.v4.1.2 namespace: <namespace> annotations: olm.skipRange: '>=4.1.0 <4.1.2'",
"properties: - type: olm.kubeversion value: version: \"1.16.0\"",
"properties: - property: type: color value: red - property: type: shape value: square - property: type: olm.gvk value: group: olm.coreos.io version: v1alpha1 kind: myresource",
"dependencies: - type: olm.package value: packageName: prometheus version: \">0.27.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2",
"type: olm.constraint value: failureMessage: 'require to have \"certified\"' cel: rule: 'properties.exists(p, p.type == \"certified\")'",
"type: olm.constraint value: failureMessage: 'require to have \"certified\" and \"stable\" properties' cel: rule: 'properties.exists(p, p.type == \"certified\") && properties.exists(p, p.type == \"stable\")'",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: All are required for Red because all: constraints: - failureMessage: Package blue is needed for package: name: blue versionRange: '>=1.0.0' - failureMessage: GVK Green/v1 is needed for gvk: group: greens.example.com version: v1 kind: Green",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Any are required for Red because any: constraints: - gvk: group: blues.example.com version: v1beta1 kind: Blue - gvk: group: blues.example.com version: v1beta2 kind: Blue - gvk: group: blues.example.com version: v1 kind: Blue",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: all: constraints: - failureMessage: Package blue is needed for package: name: blue versionRange: '>=1.0.0' - failureMessage: Cannot be required for Red because not: constraints: - gvk: group: greens.example.com version: v1alpha1 kind: greens",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Required for Red because any: constraints: - all: constraints: - package: name: blue versionRange: '>=1.0.0' - gvk: group: blues.example.com version: v1 kind: Blue - all: constraints: - package: name: blue versionRange: '<1.0.0' - gvk: group: blues.example.com version: v1beta1 kind: Blue",
"apiVersion: \"operators.coreos.com/v1alpha1\" kind: \"CatalogSource\" metadata: name: \"my-operators\" namespace: \"operators\" spec: sourceType: grpc image: example.com/my/operator-index:v1 displayName: \"My Operators\" priority: 100",
"dependencies: - type: olm.package value: packageName: etcd version: \">3.1.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: targetNamespaces: - my-namespace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: selector: cool.io/prod: \"true\"",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: PackageManifest.v1alpha1.packages.apps.redhat.com name: olm-operators namespace: local spec: selector: {} serviceAccount: metadata: creationTimestamp: null targetNamespaces: - local status: lastUpdated: 2019-02-19T16:18:28Z namespaces: - local",
"cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: false EOF",
"cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: true EOF",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-monitoring namespace: cluster-monitoring annotations: olm.providedAPIs: Alertmanager.v1.monitoring.coreos.com,Prometheus.v1.monitoring.coreos.com,PrometheusRule.v1.monitoring.coreos.com,ServiceMonitor.v1.monitoring.coreos.com spec: staticProvidedAPIs: true selector: matchLabels: something.cool.io/cluster-monitoring: \"true\"",
"attenuated service account query failed - more than one operator group(s) are managing this namespace count=2",
"apiVersion: operators.coreos.com/v1 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: conditions: - type: Upgradeable 1 status: \"False\" 2 reason: \"migration\" message: \"The Operator is performing a migration.\" lastTransitionTime: \"2020-08-24T23:15:55Z\"",
"apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster spec: disableAllDefaultSources: true 1 sources: [ 2 { name: \"community-operators\", disabled: false } ]",
"registry.redhat.io/redhat/redhat-operator-index:v4.8",
"registry.redhat.io/redhat/redhat-operator-index:v4.9",
"apiVersion: apiextensions.k8s.io/v1 1 kind: CustomResourceDefinition metadata: name: crontabs.stable.example.com 2 spec: group: stable.example.com 3 versions: name: v1 4 scope: Namespaced 5 names: plural: crontabs 6 singular: crontab 7 kind: CronTab 8 shortNames: - ct 9",
"oc create -f <file_name>.yaml",
"/apis/<spec:group>/<spec:version>/<scope>/*/<names-plural>/",
"/apis/stable.example.com/v1/namespaces/*/crontabs/",
"kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 1 metadata: name: aggregate-cron-tabs-admin-edit 2 labels: rbac.authorization.k8s.io/aggregate-to-admin: \"true\" 3 rbac.authorization.k8s.io/aggregate-to-edit: \"true\" 4 rules: - apiGroups: [\"stable.example.com\"] 5 resources: [\"crontabs\"] 6 verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\", \"deletecollection\"] 7 --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: aggregate-cron-tabs-view 8 labels: # Add these permissions to the \"view\" default role. rbac.authorization.k8s.io/aggregate-to-view: \"true\" 9 rbac.authorization.k8s.io/aggregate-to-cluster-reader: \"true\" 10 rules: - apiGroups: [\"stable.example.com\"] 11 resources: [\"crontabs\"] 12 verbs: [\"get\", \"list\", \"watch\"] 13",
"oc create -f <file_name>.yaml",
"apiVersion: \"stable.example.com/v1\" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: \"* * * * /5\" image: my-awesome-cron-image",
"oc create -f <file_name>.yaml",
"oc get <kind>",
"oc get crontab",
"NAME KIND my-new-cron-object CronTab.v1.stable.example.com",
"oc get crontabs",
"oc get crontab",
"oc get ct",
"oc get <kind> -o yaml",
"oc get ct -o yaml",
"apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: \"\" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: \"285\" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2",
"apiVersion: \"stable.example.com/v1\" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: \"* * * * /5\" image: my-awesome-cron-image",
"oc create -f <file_name>.yaml",
"oc get <kind>",
"oc get crontab",
"NAME KIND my-new-cron-object CronTab.v1.stable.example.com",
"oc get crontabs",
"oc get crontab",
"oc get ct",
"oc get <kind> -o yaml",
"oc get ct -o yaml",
"apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: \"\" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: \"285\" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2",
"oc get csv",
"oc policy add-role-to-user edit <user> -n <target_project>",
"oc get packagemanifests -n openshift-marketplace",
"NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m",
"oc describe packagemanifests <operator_name> -n openshift-marketplace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace>",
"oc apply -f operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar",
"oc apply -f sub.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: quay-operator namespace: quay spec: channel: quay-v3.4 installPlanApproval: Manual 1 name: quay-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: quay-operator.v3.4.0 2",
"oc apply -f sub.yaml",
"oc get packagemanifests -n openshift-marketplace",
"NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m",
"oc describe packagemanifests <operator_name> -n openshift-marketplace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace>",
"oc apply -f operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar",
"oc apply -f sub.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: quay-operator namespace: quay spec: channel: quay-v3.4 installPlanApproval: Manual 1 name: quay-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: quay-operator.v3.4.0 2",
"oc apply -f sub.yaml",
"oc get subscription jaeger -n openshift-operators -o yaml | grep currentCSV",
"currentCSV: jaeger-operator.v1.8.2",
"oc delete subscription jaeger -n openshift-operators",
"subscription.operators.coreos.com \"jaeger\" deleted",
"oc delete clusterserviceversion jaeger-operator.v1.8.2 -n openshift-operators",
"clusterserviceversion.operators.coreos.com \"jaeger-operator.v1.8.2\" deleted",
"ImagePullBackOff for Back-off pulling image \"example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e\"",
"rpc error: code = Unknown desc = error pinging docker registry example.com: Get \"https://example.com/v2/\": dial tcp: lookup example.com on 10.0.0.1:53: no such host",
"oc get sub,csv -n <namespace>",
"NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded",
"oc delete subscription <subscription_name> -n <namespace>",
"oc delete csv <csv_name> -n <namespace>",
"oc get job,configmap -n openshift-marketplace",
"NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s",
"oc delete job <job_name> -n openshift-marketplace",
"oc delete configmap <configmap_name> -n openshift-marketplace",
"oc get sub,csv,installplan -n <namespace>",
"oc apply -f - <<EOF apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: true 1 EOF",
"oc get events",
"LAST SEEN TYPE REASON OBJECT MESSAGE 85s Warning DisabledCopiedCSVs clusterserviceversion/my-csv.v1.0.0 CSV copying disabled for operators/my-csv.v1.0.0",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: etcd-config-test namespace: openshift-operators spec: config: env: - name: HTTP_PROXY value: test_http - name: HTTPS_PROXY value: test_https - name: NO_PROXY value: test channel: clusterwide-alpha installPlanApproval: Automatic name: etcd source: community-operators sourceNamespace: openshift-marketplace startingCSV: etcdoperator.v0.9.4-clusterwide",
"oc get deployment -n openshift-operators etcd-operator -o yaml | grep -i \"PROXY\" -A 2",
"- name: HTTP_PROXY value: test_http - name: HTTPS_PROXY value: test_https - name: NO_PROXY value: test image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21088a98b93838e284a6086b13917f96b0d9c",
"apiVersion: v1 kind: ConfigMap metadata: name: trusted-ca 1 labels: config.openshift.io/inject-trusted-cabundle: \"true\" 2",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: my-operator spec: package: etcd channel: alpha config: 1 selector: matchLabels: <labels_for_pods> 2 volumes: 3 - name: trusted-ca configMap: name: trusted-ca items: - key: ca-bundle.crt 4 path: tls-ca-bundle.pem 5 volumeMounts: 6 - name: trusted-ca mountPath: /etc/pki/ca-trust/extracted/pem readOnly: true",
"oc get subs -n <operator_namespace>",
"oc describe sub <subscription_name> -n <operator_namespace>",
"Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy",
"oc get catalogsources -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m",
"oc describe catalogsource example-catalog -n openshift-marketplace",
"Name: example-catalog Namespace: openshift-marketplace Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m",
"oc describe pod example-catalog-bwt8z -n openshift-marketplace",
"Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image \"quay.io/example-org/example-catalog:v1\": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull",
"oc edit operatorcondition <name>",
"apiVersion: operators.coreos.com/v1 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: overrides: - type: Upgradeable 1 status: \"True\" reason: \"upgradeIsSafe\" message: \"This is a known issue with the Operator where it always reports that it cannot be upgraded.\" conditions: - type: Upgradeable status: \"False\" reason: \"migration\" message: \"The operator is performing a migration.\" lastTransitionTime: \"2020-08-24T23:15:55Z\"",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Namespace metadata: name: scoped EOF",
"cat <<EOF | oc create -f - apiVersion: v1 kind: ServiceAccount metadata: name: scoped namespace: scoped EOF",
"cat <<EOF | oc create -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: scoped namespace: scoped rules: - apiGroups: [\"*\"] resources: [\"*\"] verbs: [\"*\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: scoped-bindings namespace: scoped roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: scoped subjects: - kind: ServiceAccount name: scoped namespace: scoped EOF",
"cat <<EOF | oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: scoped namespace: scoped spec: serviceAccountName: scoped targetNamespaces: - scoped EOF",
"cat <<EOF | oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: etcd namespace: scoped spec: channel: singlenamespace-alpha name: etcd source: <catalog_source_name> 1 sourceNamespace: <catalog_source_namespace> 2 EOF",
"kind: Role rules: - apiGroups: [\"operators.coreos.com\"] resources: [\"subscriptions\", \"clusterserviceversions\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"\"] resources: [\"services\", \"serviceaccounts\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"rbac.authorization.k8s.io\"] resources: [\"roles\", \"rolebindings\"] verbs: [\"get\", \"create\", \"update\", \"patch\"] - apiGroups: [\"apps\"] 1 resources: [\"deployments\"] verbs: [\"list\", \"watch\", \"get\", \"create\", \"update\", \"patch\", \"delete\"] - apiGroups: [\"\"] 2 resources: [\"pods\"] verbs: [\"list\", \"watch\", \"get\", \"create\", \"update\", \"patch\", \"delete\"]",
"kind: ClusterRole 1 rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"get\"] --- kind: Role rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"create\", \"update\", \"patch\"]",
"apiVersion: operators.coreos.com/v1 kind: Subscription metadata: name: etcd namespace: scoped status: installPlanRef: apiVersion: operators.coreos.com/v1 kind: InstallPlan name: install-4plp8 namespace: scoped resourceVersion: \"117359\" uid: 2c1df80e-afea-11e9-bce3-5254009c9c23",
"apiVersion: operators.coreos.com/v1 kind: InstallPlan status: conditions: - lastTransitionTime: \"2019-07-26T21:13:10Z\" lastUpdateTime: \"2019-07-26T21:13:10Z\" message: 'error creating clusterrole etcdoperator.v0.9.4-clusterwide-dsfx4: clusterroles.rbac.authorization.k8s.io is forbidden: User \"system:serviceaccount:scoped:scoped\" cannot create resource \"clusterroles\" in API group \"rbac.authorization.k8s.io\" at the cluster scope' reason: InstallComponentFailed status: \"False\" type: Installed phase: Failed",
"mkdir <operator_name>-index",
"The base image is expected to contain /bin/opm (with a serve subcommand) and /bin/grpc_health_probe FROM registry.redhat.io/openshift4/ose-operator-registry:v4.9 Configure the entrypoint and command ENTRYPOINT [\"/bin/opm\"] CMD [\"serve\", \"/configs\"] Copy declarative config root into image at /configs ADD <operator_name>-index /configs Set DC-specific label for the location of the DC root directory in the image LABEL operators.operatorframework.io.index.configs.v1=/configs",
". ├── <operator_name>-index └── <operator_name>-index.Dockerfile",
"opm init <operator_name> \\ 1 --default-channel=preview \\ 2 --description=./README.md \\ 3 --icon=./operator-icon.svg \\ 4 --output yaml \\ 5 > <operator_name>-index/index.yaml 6",
"opm render <registry>/<namespace>/<bundle_image_name>:<tag> \\ 1 --output=yaml >> <operator_name>-index/index.yaml 2",
"--- schema: olm.channel package: <operator_name> name: preview entries: - name: <operator_name>.v0.1.0 1",
"opm validate <operator_name>-index",
"echo USD?",
"0",
"podman build . -f <operator_name>-index.Dockerfile -t <registry>/<namespace>/<catalog_image_name>:<tag>",
"podman login <registry>",
"podman push <registry>/<namespace>/<catalog_image_name>:<tag>",
"opm index add --bundles <registry>/<namespace>/<bundle_image_name>:<tag> \\ 1 --tag <registry>/<namespace>/<index_image_name>:<tag> \\ 2 [--binary-image <registry_base_image>] 3",
"podman login <registry>",
"podman push <registry>/<namespace>/<index_image_name>:<tag>",
"opm index add --bundles <registry>/<namespace>/<new_bundle_image>@sha256:<digest> \\ 1 --from-index <registry>/<namespace>/<existing_index_image>:<existing_tag> \\ 2 --tag <registry>/<namespace>/<existing_index_image>:<updated_tag> \\ 3 --pull-tool podman 4",
"opm index add --bundles quay.io/ocs-dev/ocs-operator@sha256:c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 --from-index mirror.example.com/abc/abc-redhat-operator-index:4.10 --tag mirror.example.com/abc/abc-redhat-operator-index:4.10.1 --pull-tool podman",
"podman push <registry>/<namespace>/<existing_index_image>:<updated_tag>",
"oc get packagemanifests -n openshift-marketplace",
"podman login <target_registry>",
"podman run -p50051:50051 -it registry.redhat.io/redhat/redhat-operator-index:v4.10",
"Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.10 Getting image source signatures Copying blob ae8a0c23f5b1 done INFO[0000] serving registry database=/database/index.db port=50051",
"grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out",
"{ \"name\": \"advanced-cluster-management\" } { \"name\": \"jaeger-product\" } { { \"name\": \"quay-operator\" }",
"opm index prune -f registry.redhat.io/redhat/redhat-operator-index:v4.10 \\ 1 -p advanced-cluster-management,jaeger-product,quay-operator \\ 2 [-i registry.redhat.io/openshift4/ose-operator-registry:v4.9] \\ 3 -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.10 4",
"podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.10",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace 1 annotations: olm.catalogImageTemplate: 2 \"<registry>/<namespace>/<index_image_name>:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}\" spec: sourceType: grpc image: <registry>/<namespace>/<index_image_name>:<tag> 3 displayName: My Operator Catalog publisher: <publisher_name> 4 updateStrategy: registryPoll: 5 interval: 30m",
"oc apply -f catalogSource.yaml",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h",
"oc get catalogsource -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s",
"oc get packagemanifest -n openshift-marketplace",
"NAME CATALOG AGE jaeger-product My Operator Catalog 93s",
"podman login <registry>:<port>",
"{ \"auths\": { \"registry.redhat.io\": { \"auth\": \"FrNHNydQXdzclNqdg==\" }, \"quay.io\": { \"auth\": \"fegdsRib21iMQ==\" }, \"https://quay.io/my-namespace/my-user/my-image\": { \"auth\": \"eWfjwsDdfsa221==\" }, \"https://quay.io/my-namespace/my-user\": { \"auth\": \"feFweDdscw34rR==\" }, \"https://quay.io/my-namespace\": { \"auth\": \"frwEews4fescyq==\" } } }",
"{ \"auths\": { \"registry.redhat.io\": { \"auth\": \"FrNHNydQXdzclNqdg==\" } } }",
"{ \"auths\": { \"quay.io\": { \"auth\": \"Xd2lhdsbnRib21iMQ==\" } } }",
"oc create secret generic <secret_name> -n openshift-marketplace --from-file=.dockerconfigjson=<path/to/registry/credentials> --type=kubernetes.io/dockerconfigjson",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace spec: sourceType: grpc secrets: 1 - \"<secret_name_1>\" - \"<secret_name_2>\" image: <registry>:<port>/<namespace>/<image>:<tag> displayName: My Operator Catalog publisher: <publisher_name> updateStrategy: registryPoll: interval: 30m",
"oc extract secret/pull-secret -n openshift-config --confirm",
"cat .dockerconfigjson | jq --compact-output '.auths[\"<registry>:<port>/<namespace>/\"] |= . + {\"auth\":\"<token>\"}' \\ 1 > new_dockerconfigjson",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=new_dockerconfigjson",
"oc create secret generic <secret_name> -n <tenant_namespace> --from-file=.dockerconfigjson=<path/to/registry/credentials> --type=kubernetes.io/dockerconfigjson",
"oc get sa -n <tenant_namespace> 1",
"NAME SECRETS AGE builder 2 6m1s default 2 6m1s deployer 2 6m1s etcd-operator 2 5m18s 1",
"oc secrets link <operator_sa> -n <tenant_namespace> <secret_name> --for=pull",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"podman login registry.redhat.io",
"podman login <target_registry>",
"podman run -p50051:50051 -it registry.redhat.io/redhat/redhat-operator-index:v4.10",
"Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4.10 Getting image source signatures Copying blob ae8a0c23f5b1 done INFO[0000] serving registry database=/database/index.db port=50051",
"grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out",
"{ \"name\": \"advanced-cluster-management\" } { \"name\": \"jaeger-product\" } { { \"name\": \"quay-operator\" }",
"opm index prune -f registry.redhat.io/redhat/redhat-operator-index:v4.10 \\ 1 -p advanced-cluster-management,jaeger-product,quay-operator \\ 2 [-i registry.redhat.io/openshift4/ose-operator-registry:v4.9] \\ 3 -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4.10 4",
"podman push <target_registry>:<port>/<namespace>/redhat-operator-index:v4.10",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog 1 namespace: openshift-marketplace 2 spec: sourceType: grpc image: <registry>/<namespace>/redhat-operator-index:v4.10 3 displayName: My Operator Catalog publisher: <publisher_name> 4 updateStrategy: registryPoll: 5 interval: 30m",
"oc apply -f catalogSource.yaml",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h",
"oc get catalogsource -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s",
"oc get packagemanifest -n openshift-marketplace",
"NAME CATALOG AGE jaeger-product My Operator Catalog 93s",
"opm index add --bundles <registry>/<namespace>/<new_bundle_image>@sha256:<digest> \\ 1 --from-index <registry>/<namespace>/<existing_index_image>:<existing_tag> \\ 2 --tag <registry>/<namespace>/<existing_index_image>:<updated_tag> \\ 3 --pull-tool podman 4",
"opm index add --bundles quay.io/ocs-dev/ocs-operator@sha256:c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 --from-index mirror.example.com/abc/abc-redhat-operator-index:4.10 --tag mirror.example.com/abc/abc-redhat-operator-index:4.10.1 --pull-tool podman",
"podman push <registry>/<namespace>/<existing_index_image>:<updated_tag>",
"oc replace -f ./manifests-redhat-operator-index-<random_number>/imageContentSourcePolicy.yaml",
"oc get packagemanifests -n openshift-marketplace",
"grpcPodConfig: nodeSelector: custom_label: <label>",
"grpcPodConfig: priorityClassName: <priority_class>",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: example-catalog namespace: namespace: openshift-marketplace annotations: operatorframework.io/priorityclass: system-cluster-critical",
"grpcPodConfig: tolerations: - key: \"<key_name>\" operator: \"<operator_type>\" value: \"<value>\" effect: \"<effect>\"",
"tar xvf operator-sdk-v1.16.0-ocp-linux-x86_64.tar.gz",
"chmod +x operator-sdk",
"echo USDPATH",
"sudo mv ./operator-sdk /usr/local/bin/operator-sdk",
"operator-sdk version",
"operator-sdk version: \"v1.16.0-ocp\",",
"ports: - containerPort: 8443 + protocol: TCP name: https",
"resources: limits: - cpu: 100m - memory: 30Mi + cpu: 200m + memory: 100Mi",
"template: metadata: annotations: kubectl.kubernetes.io/default-container: manager",
"k8s.io/api v0.22.1 k8s.io/apimachinery v0.22.1 k8s.io/client-go v0.22.1 sigs.k8s.io/controller-runtime v0.10.0",
"go mod tidy",
"+ ENVTEST_K8S_VERSION = 1.22 test: manifests generate fmt vet envtest ## Run tests. - go test ./... -coverprofile cover.out + KUBEBUILDER_ASSETS=\"USD(shell USD(ENVTEST) use USD(ENVTEST_K8S_VERSION) -p path)\" go test ./... -coverprofile cover.out - USD(CONTROLLER_GEN) USD(CRD_OPTIONS) rbac:roleName=manager-role webhook paths=\"./...\" output:crd:artifacts:config=config/crd/bases + USD(CONTROLLER_GEN) rbac:roleName=manager-role crd webhook paths=\"./...\" output:crd:artifacts:config=config/crd/bases Produce CRDs that work back to Kubernetes 1.11 (no version conversion) - CRD_OPTIONS ?= \"crd:trivialVersions=true,preserveUnknownFields=false\" - admissionReviewVersions={v1,v1beta1} + admissionReviewVersions=v1 + ifndef ignore-not-found + ignore-not-found = false + endif ##@ Deployment - sh kubectl delete -f - + sh kubectl delete --ignore-not-found=USD(ignore-not-found) -f -",
"make manifest",
"- name: kubernetes.core version: \"2.2.0\"",
"- name: operator_sdk.util version: \"0.3.1\"",
"# TODO(user): Configure the resources accordingly based on the project requirements. # More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ resources: limits: cpu: 500m memory: 768Mi requests: cpu: 10m memory: 256Mi",
"ANSIBLE_ROLES_PATH=\"USD(ANSIBLE_ROLES_PATH):USD(shell pwd)/roles\" USD(ANSIBLE_OPERATOR) run",
"mkdir memcached-operator",
"cd memcached-operator",
"operator-sdk init --domain=example.com --repo=github.com/example-inc/memcached-operator",
"operator-sdk create api --resource=true --controller=true --group cache --version v1 --kind Memcached",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"oc logs deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system",
"oc delete -f config/samples/cache_v1_memcached -n memcached-operator-system",
"make undeploy",
"mkdir -p USDHOME/projects/memcached-operator",
"cd USDHOME/projects/memcached-operator",
"export GO111MODULE=on",
"operator-sdk init --domain=example.com --repo=github.com/example-inc/memcached-operator",
"domain: example.com layout: - go.kubebuilder.io/v3 projectName: memcached-operator repo: github.com/example-inc/memcached-operator version: \"3\" plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {}",
"mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: namespace})",
"mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: \"\"})",
"var namespaces []string 1 mgr, err := ctrl.NewManager(cfg, manager.Options{ 2 NewCache: cache.MultiNamespacedCacheBuilder(namespaces), })",
"operator-sdk edit --multigroup=true",
"domain: example.com layout: go.kubebuilder.io/v3 multigroup: true",
"operator-sdk create api --group=cache --version=v1 --kind=Memcached",
"Create Resource [y/n] y Create Controller [y/n] y",
"Writing scaffold for you to edit api/v1/memcached_types.go controllers/memcached_controller.go",
"// MemcachedSpec defines the desired state of Memcached type MemcachedSpec struct { // +kubebuilder:validation:Minimum=0 // Size is the size of the memcached deployment Size int32 `json:\"size\"` } // MemcachedStatus defines the observed state of Memcached type MemcachedStatus struct { // Nodes are the names of the memcached pods Nodes []string `json:\"nodes\"` }",
"make generate",
"make manifests",
"/* Copyright 2020. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package controllers import ( appsv1 \"k8s.io/api/apps/v1\" corev1 \"k8s.io/api/core/v1\" \"k8s.io/apimachinery/pkg/api/errors\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/types\" \"reflect\" \"context\" \"github.com/go-logr/logr\" \"k8s.io/apimachinery/pkg/runtime\" ctrl \"sigs.k8s.io/controller-runtime\" \"sigs.k8s.io/controller-runtime/pkg/client\" ctrllog \"sigs.k8s.io/controller-runtime/pkg/log\" cachev1 \"github.com/example-inc/memcached-operator/api/v1\" ) // MemcachedReconciler reconciles a Memcached object type MemcachedReconciler struct { client.Client Log logr.Logger Scheme *runtime.Scheme } // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list; // Reconcile is part of the main kubernetes reconciliation loop which aims to // move the current state of the cluster closer to the desired state. // TODO(user): Modify the Reconcile function to compare the state specified by // the Memcached object against the actual cluster state, and then // perform operations to make the cluster state reflect the state specified by // the user. // // For more details, check Reconcile and its Result here: // - https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/reconcile func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { //log := r.Log.WithValues(\"memcached\", req.NamespacedName) log := ctrllog.FromContext(ctx) // Fetch the Memcached instance memcached := &cachev1.Memcached{} err := r.Get(ctx, req.NamespacedName, memcached) if err != nil { if errors.IsNotFound(err) { // Request object not found, could have been deleted after reconcile request. // Owned objects are automatically garbage collected. For additional cleanup logic use finalizers. // Return and don't requeue log.Info(\"Memcached resource not found. Ignoring since object must be deleted\") return ctrl.Result{}, nil } // Error reading the object - requeue the request. 
log.Error(err, \"Failed to get Memcached\") return ctrl.Result{}, err } // Check if the deployment already exists, if not create a new one found := &appsv1.Deployment{} err = r.Get(ctx, types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found) if err != nil && errors.IsNotFound(err) { // Define a new deployment dep := r.deploymentForMemcached(memcached) log.Info(\"Creating a new Deployment\", \"Deployment.Namespace\", dep.Namespace, \"Deployment.Name\", dep.Name) err = r.Create(ctx, dep) if err != nil { log.Error(err, \"Failed to create new Deployment\", \"Deployment.Namespace\", dep.Namespace, \"Deployment.Name\", dep.Name) return ctrl.Result{}, err } // Deployment created successfully - return and requeue return ctrl.Result{Requeue: true}, nil } else if err != nil { log.Error(err, \"Failed to get Deployment\") return ctrl.Result{}, err } // Ensure the deployment size is the same as the spec size := memcached.Spec.Size if *found.Spec.Replicas != size { found.Spec.Replicas = &size err = r.Update(ctx, found) if err != nil { log.Error(err, \"Failed to update Deployment\", \"Deployment.Namespace\", found.Namespace, \"Deployment.Name\", found.Name) return ctrl.Result{}, err } // Spec updated - return and requeue return ctrl.Result{Requeue: true}, nil } // Update the Memcached status with the pod names // List the pods for this memcached's deployment podList := &corev1.PodList{} listOpts := []client.ListOption{ client.InNamespace(memcached.Namespace), client.MatchingLabels(labelsForMemcached(memcached.Name)), } if err = r.List(ctx, podList, listOpts...); err != nil { log.Error(err, \"Failed to list pods\", \"Memcached.Namespace\", memcached.Namespace, \"Memcached.Name\", memcached.Name) return ctrl.Result{}, err } podNames := getPodNames(podList.Items) // Update status.Nodes if needed if !reflect.DeepEqual(podNames, memcached.Status.Nodes) { memcached.Status.Nodes = podNames err := r.Status().Update(ctx, memcached) if err != nil { log.Error(err, \"Failed to update Memcached status\") return ctrl.Result{}, err } } return ctrl.Result{}, nil } // deploymentForMemcached returns a memcached Deployment object func (r *MemcachedReconciler) deploymentForMemcached(m *cachev1.Memcached) *appsv1.Deployment { ls := labelsForMemcached(m.Name) replicas := m.Spec.Size dep := &appsv1.Deployment{ ObjectMeta: metav1.ObjectMeta{ Name: m.Name, Namespace: m.Namespace, }, Spec: appsv1.DeploymentSpec{ Replicas: &replicas, Selector: &metav1.LabelSelector{ MatchLabels: ls, }, Template: corev1.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: ls, }, Spec: corev1.PodSpec{ Containers: []corev1.Container{{ Image: \"memcached:1.4.36-alpine\", Name: \"memcached\", Command: []string{\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\"}, Ports: []corev1.ContainerPort{{ ContainerPort: 11211, Name: \"memcached\", }}, }}, }, }, }, } // Set Memcached instance as the owner and controller ctrl.SetControllerReference(m, dep, r.Scheme) return dep } // labelsForMemcached returns the labels for selecting the resources // belonging to the given memcached CR name. func labelsForMemcached(name string) map[string]string { return map[string]string{\"app\": \"memcached\", \"memcached_cr\": name} } // getPodNames returns the pod names of the array of pods passed in func getPodNames(pods []corev1.Pod) []string { var podNames []string for _, pod := range pods { podNames = append(podNames, pod.Name) } return podNames } // SetupWithManager sets up the controller with the Manager. 
func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). Complete(r) }",
"import ( appsv1 \"k8s.io/api/apps/v1\" ) func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). Complete(r) }",
"func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). WithOptions(controller.Options{ MaxConcurrentReconciles: 2, }). Complete(r) }",
"import ( ctrl \"sigs.k8s.io/controller-runtime\" cachev1 \"github.com/example-inc/memcached-operator/api/v1\" ) func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { // Lookup the Memcached instance for this reconcile request memcached := &cachev1.Memcached{} err := r.Get(ctx, req.NamespacedName, memcached) }",
"// Reconcile successful - don't requeue return ctrl.Result{}, nil // Reconcile failed due to error - requeue return ctrl.Result{}, err // Requeue for any reason other than an error return ctrl.Result{Requeue: true}, nil",
"import \"time\" // Reconcile for any reason other than an error after 5 seconds return ctrl.Result{RequeueAfter: time.Second*5}, nil",
"// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list; func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { }",
"import ( \"github.com/operator-framework/operator-lib/proxy\" )",
"for i, container := range dep.Spec.Template.Spec.Containers { dep.Spec.Template.Spec.Containers[i].Env = append(container.Env, proxy.ReadProxyVarsFromEnv()...) }",
"containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"",
"make install run",
"2021-01-10T21:09:29.016-0700 INFO controller-runtime.metrics metrics server is starting to listen {\"addr\": \":8080\"} 2021-01-10T21:09:29.017-0700 INFO setup starting manager 2021-01-10T21:09:29.017-0700 INFO controller-runtime.manager starting metrics server {\"path\": \"/metrics\"} 2021-01-10T21:09:29.018-0700 INFO controller-runtime.manager.controller.memcached Starting EventSource {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\", \"source\": \"kind source: /, Kind=\"} 2021-01-10T21:09:29.218-0700 INFO controller-runtime.manager.controller.memcached Starting Controller {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\"} 2021-01-10T21:09:29.218-0700 INFO controller-runtime.manager.controller.memcached Starting workers {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\", \"worker count\": 1}",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle [-n <namespace>] \\ 1 <registry>/<user>/<bundle_image_name>:<tag>",
"oc project memcached-operator-system",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3",
"oc apply -f config/samples/cache_v1_memcached.yaml",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 8m memcached-sample 3/3 3 3 1m",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m",
"oc get memcached/memcached-sample -o yaml",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3 status: nodes: - memcached-sample-6fd7c98d8-7dqdr - memcached-sample-6fd7c98d8-g5k7v - memcached-sample-6fd7c98d8-m7vn7",
"oc patch memcached memcached-sample -p '{\"spec\":{\"size\": 5}}' --type=merge",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 10m memcached-sample 5/5 5 5 3m",
"oc delete -f config/samples/cache_v1_memcached.yaml",
"make undeploy",
"operator-sdk cleanup <project_name>",
"mkdir memcached-operator",
"cd memcached-operator",
"operator-sdk init --plugins=ansible --domain=example.com",
"operator-sdk create api --group cache --version v1 --kind Memcached --generate-role 1",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"oc logs deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system",
"I0205 17:48:45.881666 7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612547325.8819902,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612547325.98242,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612547325.9824686,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":4} {\"level\":\"info\",\"ts\":1612547348.8311093,\"logger\":\"runner\",\"msg\":\"Ansible-runner exited successfully\",\"job\":\"4037200794235010051\",\"name\":\"memcached-sample\",\"namespace\":\"memcached-operator-system\"}",
"oc delete -f config/samples/cache_v1_memcached -n memcached-operator-system",
"make undeploy",
"mkdir -p USDHOME/projects/memcached-operator",
"cd USDHOME/projects/memcached-operator",
"operator-sdk init --plugins=ansible --domain=example.com",
"domain: example.com layout: - ansible.sdk.operatorframework.io/v1 plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {} projectName: memcached-operator version: \"3\"",
"operator-sdk create api --group cache --version v1 --kind Memcached --generate-role 1",
"--- - name: start memcached k8s: definition: kind: Deployment apiVersion: apps/v1 metadata: name: '{{ ansible_operator_meta.name }}-memcached' namespace: '{{ ansible_operator_meta.namespace }}' spec: replicas: \"{{size}}\" selector: matchLabels: app: memcached template: metadata: labels: app: memcached spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v image: \"docker.io/memcached:1.4.36-alpine\" ports: - containerPort: 11211",
"--- defaults file for Memcached size: 1",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: labels: app.kubernetes.io/name: memcached app.kubernetes.io/instance: memcached-sample app.kubernetes.io/part-of: memcached-operator app.kubernetes.io/managed-by: kustomize app.kubernetes.io/created-by: memcached-operator name: memcached-sample spec: size: 3",
"env: - name: HTTP_PROXY value: '{{ lookup(\"env\", \"HTTP_PROXY\") | default(\"\", True) }}' - name: http_proxy value: '{{ lookup(\"env\", \"HTTP_PROXY\") | default(\"\", True) }}'",
"containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"",
"make install run",
"{\"level\":\"info\",\"ts\":1612589622.7888272,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612589622.7897573,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} {\"level\":\"info\",\"ts\":1612589622.789971,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612589622.7899997,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612589622.8904517,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612589622.8905244,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":8}",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle [-n <namespace>] \\ 1 <registry>/<user>/<bundle_image_name>:<tag>",
"oc project memcached-operator-system",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3",
"oc apply -f config/samples/cache_v1_memcached.yaml",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 8m memcached-sample 3/3 3 3 1m",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m",
"oc get memcached/memcached-sample -o yaml",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3 status: nodes: - memcached-sample-6fd7c98d8-7dqdr - memcached-sample-6fd7c98d8-g5k7v - memcached-sample-6fd7c98d8-m7vn7",
"oc patch memcached memcached-sample -p '{\"spec\":{\"size\": 5}}' --type=merge",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 10m memcached-sample 5/5 5 5 3m",
"oc delete -f config/samples/cache_v1_memcached.yaml",
"make undeploy",
"operator-sdk cleanup <project_name>",
"apiVersion: \"test1.example.com/v1alpha1\" kind: \"Test1\" metadata: name: \"example\" annotations: ansible.operator-sdk/reconcile-period: \"30s\"",
"- version: v1alpha1 1 group: test1.example.com kind: Test1 role: /opt/ansible/roles/Test1 - version: v1alpha1 2 group: test2.example.com kind: Test2 playbook: /opt/ansible/playbook.yml - version: v1alpha1 3 group: test3.example.com kind: Test3 playbook: /opt/ansible/test3.yml reconcilePeriod: 0 manageStatus: false",
"- version: v1alpha1 group: app.example.com kind: AppService playbook: /opt/ansible/playbook.yml maxRunnerArtifacts: 30 reconcilePeriod: 5s manageStatus: False watchDependentResources: False",
"apiVersion: \"app.example.com/v1alpha1\" kind: \"Database\" metadata: name: \"example\" spec: message: \"Hello world 2\" newParameter: \"newParam\"",
"{ \"meta\": { \"name\": \"<cr_name>\", \"namespace\": \"<cr_namespace>\", }, \"message\": \"Hello world 2\", \"new_parameter\": \"newParam\", \"_app_example_com_database\": { <full_crd> }, }",
"--- - debug: msg: \"name: {{ ansible_operator_meta.name }}, {{ ansible_operator_meta.namespace }}\"",
"sudo dnf install ansible",
"pip3 install openshift",
"ansible-galaxy collection install community.kubernetes",
"ansible-galaxy collection install -r requirements.yml",
"--- - name: set ConfigMap example-config to {{ state }} community.kubernetes.k8s: api_version: v1 kind: ConfigMap name: example-config namespace: default 1 state: \"{{ state }}\" ignore_errors: true 2",
"--- state: present",
"--- - hosts: localhost roles: - <kind>",
"ansible-playbook playbook.yml",
"[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' PLAY [localhost] ******************************************************************************** TASK [Gathering Facts] ******************************************************************************** ok: [localhost] TASK [memcached : set ConfigMap example-config to present] ******************************************************************************** changed: [localhost] PLAY RECAP ******************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
"oc get configmaps",
"NAME DATA AGE example-config 0 2m1s",
"ansible-playbook playbook.yml --extra-vars state=absent",
"[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' PLAY [localhost] ******************************************************************************** TASK [Gathering Facts] ******************************************************************************** ok: [localhost] TASK [memcached : set ConfigMap example-config to absent] ******************************************************************************** changed: [localhost] PLAY RECAP ******************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
"oc get configmaps",
"apiVersion: \"test1.example.com/v1alpha1\" kind: \"Test1\" metadata: name: \"example\" annotations: ansible.operator-sdk/reconcile-period: \"30s\"",
"make install",
"/usr/bin/kustomize build config/crd | kubectl apply -f - customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created",
"make run",
"/home/user/memcached-operator/bin/ansible-operator run {\"level\":\"info\",\"ts\":1612739145.2871568,\"logger\":\"cmd\",\"msg\":\"Version\",\"Go Version\":\"go1.15.5\",\"GOOS\":\"linux\",\"GOARCH\":\"amd64\",\"ansible-operator\":\"v1.10.1\",\"commit\":\"1abf57985b43bf6a59dcd18147b3c574fa57d3f6\"} {\"level\":\"info\",\"ts\":1612739148.347306,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\":8080\"} {\"level\":\"info\",\"ts\":1612739148.3488882,\"logger\":\"watches\",\"msg\":\"Environment variable not set; using default value\",\"envVar\":\"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM\",\"default\":2} {\"level\":\"info\",\"ts\":1612739148.3490262,\"logger\":\"cmd\",\"msg\":\"Environment variable not set; using default value\",\"Namespace\":\"\",\"envVar\":\"ANSIBLE_DEBUG_LOGS\",\"ANSIBLE_DEBUG_LOGS\":false} {\"level\":\"info\",\"ts\":1612739148.3490646,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612739148.350217,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} {\"level\":\"info\",\"ts\":1612739148.3506632,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612739148.350784,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612739148.5511978,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612739148.5512562,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":8}",
"apiVersion: <group>.example.com/v1alpha1 kind: <kind> metadata: name: \"<kind>-sample\"",
"oc apply -f config/samples/<gvk>.yaml",
"oc get configmaps",
"NAME STATUS AGE example-config Active 3s",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: state: absent",
"oc apply -f config/samples/<gvk>.yaml",
"oc get configmap",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"oc logs deployment/<project_name>-controller-manager -c manager \\ 1 -n <namespace> 2",
"{\"level\":\"info\",\"ts\":1612732105.0579333,\"logger\":\"cmd\",\"msg\":\"Version\",\"Go Version\":\"go1.15.5\",\"GOOS\":\"linux\",\"GOARCH\":\"amd64\",\"ansible-operator\":\"v1.10.1\",\"commit\":\"1abf57985b43bf6a59dcd18147b3c574fa57d3f6\"} {\"level\":\"info\",\"ts\":1612732105.0587437,\"logger\":\"cmd\",\"msg\":\"WATCH_NAMESPACE environment variable not set. Watching all namespaces.\",\"Namespace\":\"\"} I0207 21:08:26.110949 7 request.go:645] Throttling request took 1.035521578s, request: GET:https://172.30.0.1:443/apis/flowcontrol.apiserver.k8s.io/v1alpha1?timeout=32s {\"level\":\"info\",\"ts\":1612732107.768025,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\"127.0.0.1:8080\"} {\"level\":\"info\",\"ts\":1612732107.768796,\"logger\":\"watches\",\"msg\":\"Environment variable not set; using default value\",\"envVar\":\"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM\",\"default\":2} {\"level\":\"info\",\"ts\":1612732107.7688773,\"logger\":\"cmd\",\"msg\":\"Environment variable not set; using default value\",\"Namespace\":\"\",\"envVar\":\"ANSIBLE_DEBUG_LOGS\",\"ANSIBLE_DEBUG_LOGS\":false} {\"level\":\"info\",\"ts\":1612732107.7688901,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612732107.770032,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} I0207 21:08:27.770185 7 leaderelection.go:243] attempting to acquire leader lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612732107.770202,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} I0207 21:08:27.784854 7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612732107.7850506,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612732107.8853772,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612732107.8854098,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":4}",
"containers: - name: manager env: - name: ANSIBLE_DEBUG_LOGS value: \"True\"",
"apiVersion: \"cache.example.com/v1alpha1\" kind: \"Memcached\" metadata: name: \"example-memcached\" annotations: \"ansible.sdk.operatorframework.io/verbosity\": \"4\" spec: size: 4",
"status: conditions: - ansibleResult: changed: 3 completion: 2018-12-03T13:45:57.13329 failures: 1 ok: 6 skipped: 0 lastTransitionTime: 2018-12-03T13:45:57Z message: 'Status code was -1 and not [200]: Request failed: <urlopen error [Errno 113] No route to host>' reason: Failed status: \"True\" type: Failure - lastTransitionTime: 2018-12-03T13:46:13Z message: Running reconciliation reason: Running status: \"True\" type: Running",
"- version: v1 group: api.example.com kind: <kind> role: <role> manageStatus: false",
"- operator_sdk.util.k8s_status: api_version: app.example.com/v1 kind: <kind> name: \"{{ ansible_operator_meta.name }}\" namespace: \"{{ ansible_operator_meta.namespace }}\" status: test: data",
"collections: - operator_sdk.util",
"k8s_status: status: key1: value1",
"mkdir nginx-operator",
"cd nginx-operator",
"operator-sdk init --plugins=helm",
"operator-sdk create api --group demo --version v1 --kind Nginx",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc adm policy add-scc-to-user anyuid system:serviceaccount:nginx-operator-system:nginx-sample",
"oc apply -f config/samples/demo_v1_nginx.yaml -n nginx-operator-system",
"oc logs deployment.apps/nginx-operator-controller-manager -c manager -n nginx-operator-system",
"oc delete -f config/samples/demo_v1_nginx -n nginx-operator-system",
"make undeploy",
"mkdir -p USDHOME/projects/nginx-operator",
"cd USDHOME/projects/nginx-operator",
"operator-sdk init --plugins=helm --domain=example.com --group=demo --version=v1 --kind=Nginx",
"operator-sdk init --plugins helm --help",
"domain: example.com layout: - helm.sdk.operatorframework.io/v1 plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {} projectName: nginx-operator resources: - api: crdVersion: v1 namespaced: true domain: example.com group: demo kind: Nginx version: v1 version: \"3\"",
"Use the 'create api' subcommand to add watches to this file. - group: demo version: v1 kind: Nginx chart: helm-charts/nginx +kubebuilder:scaffold:watch",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 2",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 2 service: port: 8080",
"- group: demo.example.com version: v1alpha1 kind: Nginx chart: helm-charts/nginx overrideValues: proxy.http: USDHTTP_PROXY",
"proxy: http: \"\" https: \"\" no_proxy: \"\"",
"containers: - name: {{ .Chart.Name }} securityContext: - toYaml {{ .Values.securityContext | nindent 12 }} image: \"{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}\" imagePullPolicy: {{ .Values.image.pullPolicy }} env: - name: http_proxy value: \"{{ .Values.proxy.http }}\"",
"containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"",
"make install run",
"{\"level\":\"info\",\"ts\":1612652419.9289865,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\":8080\"} {\"level\":\"info\",\"ts\":1612652419.9296563,\"logger\":\"helm.controller\",\"msg\":\"Watching resource\",\"apiVersion\":\"demo.example.com/v1\",\"kind\":\"Nginx\",\"namespace\":\"\",\"reconcilePeriod\":\"1m0s\"} {\"level\":\"info\",\"ts\":1612652419.929983,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612652419.930015,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: demo.example.com/v1, Kind=Nginx\"} {\"level\":\"info\",\"ts\":1612652420.2307851,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612652420.2309358,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting workers\",\"worker count\":8}",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle [-n <namespace>] \\ 1 <registry>/<user>/<bundle_image_name>:<tag>",
"oc project nginx-operator-system",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 3",
"oc adm policy add-scc-to-user anyuid system:serviceaccount:nginx-operator-system:nginx-sample",
"oc apply -f config/samples/demo_v1_nginx.yaml",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE nginx-operator-controller-manager 1/1 1 1 8m nginx-sample 3/3 3 3 1m",
"oc get pods",
"NAME READY STATUS RESTARTS AGE nginx-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m nginx-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m nginx-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m",
"oc get nginx/nginx-sample -o yaml",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 3 status: nodes: - nginx-sample-6fd7c98d8-7dqdr - nginx-sample-6fd7c98d8-g5k7v - nginx-sample-6fd7c98d8-m7vn7",
"oc patch nginx nginx-sample -p '{\"spec\":{\"replicaCount\": 5}}' --type=merge",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE nginx-operator-controller-manager 1/1 1 1 10m nginx-sample 5/5 5 5 3m",
"oc delete -f config/samples/demo_v1_nginx.yaml",
"make undeploy",
"operator-sdk cleanup <project_name>",
"apiVersion: apache.org/v1alpha1 kind: Tomcat metadata: name: example-app spec: replicaCount: 2",
"{{ .Values.replicaCount }}",
"oc get Tomcats --all-namespaces",
"mkdir -p USDHOME/github.com/example/memcached-operator",
"cd USDHOME/github.com/example/memcached-operator",
"operator-sdk init --plugins=hybrid.helm.sdk.operatorframework.io --project-version=\"3\" --domain example.com --repo=github.com/example/memcached-operator",
"operator-sdk create api --plugins helm.sdk.operatorframework.io/v1 --group cache --version v1 --kind Memcached",
"operator-sdk create api --plugins helm.sdk.operatorframework.io/v1 --help",
"Use the 'create api' subcommand to add watches to this file. - group: cache.my.domain version: v1 kind: Memcached chart: helm-charts/memcached #+kubebuilder:scaffold:watch",
"// Operator's main.go // With the help of helpers provided in the library, the reconciler can be // configured here before starting the controller with this reconciler. reconciler := reconciler.New( reconciler.WithChart(*chart), reconciler.WithGroupVersionKind(gvk), ) if err := reconciler.SetupWithManager(mgr); err != nil { panic(fmt.Sprintf(\"unable to create reconciler: %s\", err)) }",
"operator-sdk create api --group=cache --version v1 --kind MemcachedBackup --resource --controller --plugins=go/v3",
"Create Resource [y/n] y Create Controller [y/n] y",
"// MemcachedBackupSpec defines the desired state of MemcachedBackup type MemcachedBackupSpec struct { // INSERT ADDITIONAL SPEC FIELDS - desired state of cluster // Important: Run \"make\" to regenerate code after modifying this file //+kubebuilder:validation:Minimum=0 // Size is the size of the memcached deployment Size int32 `json:\"size\"` } // MemcachedBackupStatus defines the observed state of MemcachedBackup type MemcachedBackupStatus struct { // INSERT ADDITIONAL STATUS FIELD - define observed state of cluster // Important: Run \"make\" to regenerate code after modifying this file // Nodes are the names of the memcached pods Nodes []string `json:\"nodes\"` }",
"make generate",
"make manifests",
"for _, w := range ws { // Register controller with the factory reconcilePeriod := defaultReconcilePeriod if w.ReconcilePeriod != nil { reconcilePeriod = w.ReconcilePeriod.Duration } maxConcurrentReconciles := defaultMaxConcurrentReconciles if w.MaxConcurrentReconciles != nil { maxConcurrentReconciles = *w.MaxConcurrentReconciles } r, err := reconciler.New( reconciler.WithChart(*w.Chart), reconciler.WithGroupVersionKind(w.GroupVersionKind), reconciler.WithOverrideValues(w.OverrideValues), reconciler.SkipDependentWatches(w.WatchDependentResources != nil && !*w.WatchDependentResources), reconciler.WithMaxConcurrentReconciles(maxConcurrentReconciles), reconciler.WithReconcilePeriod(reconcilePeriod), reconciler.WithInstallAnnotations(annotation.DefaultInstallAnnotations...), reconciler.WithUpgradeAnnotations(annotation.DefaultUpgradeAnnotations...), reconciler.WithUninstallAnnotations(annotation.DefaultUninstallAnnotations...), )",
"// Setup manager with Go API if err = (&controllers.MemcachedBackupReconciler{ Client: mgr.GetClient(), Scheme: mgr.GetScheme(), }).SetupWithManager(mgr); err != nil { setupLog.Error(err, \"unable to create controller\", \"controller\", \"MemcachedBackup\") os.Exit(1) } // Setup manager with Helm API for _, w := range ws { if err := r.SetupWithManager(mgr); err != nil { setupLog.Error(err, \"unable to create controller\", \"controller\", \"Helm\") os.Exit(1) } setupLog.Info(\"configured watch\", \"gvk\", w.GroupVersionKind, \"chartPath\", w.ChartPath, \"maxConcurrentReconciles\", maxConcurrentReconciles, \"reconcilePeriod\", reconcilePeriod) } // Start the manager if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil { setupLog.Error(err, \"problem running manager\") os.Exit(1) }",
"--- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: manager-role rules: - apiGroups: - \"\" resources: - namespaces verbs: - get - apiGroups: - apps resources: - deployments - daemonsets - replicasets - statefulsets verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcachedbackups verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcachedbackups/finalizers verbs: - create - delete - get - list - patch - update - watch - apiGroups: - \"\" resources: - pods - services - services/finalizers - endpoints - persistentvolumeclaims - events - configmaps - secrets - serviceaccounts verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcachedbackups/status verbs: - get - patch - update - apiGroups: - policy resources: - events - poddisruptionbudgets verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcacheds - memcacheds/status - memcacheds/finalizers verbs: - create - delete - get - list - patch - update - watch",
"make install run",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"oc project <project_name>-system",
"apiVersion: cache.my.domain/v1 kind: Memcached metadata: name: memcached-sample spec: # Default values copied from <project_dir>/helm-charts/memcached/values.yaml affinity: {} autoscaling: enabled: false maxReplicas: 100 minReplicas: 1 targetCPUUtilizationPercentage: 80 fullnameOverride: \"\" image: pullPolicy: IfNotPresent repository: nginx tag: \"\" imagePullSecrets: [] ingress: annotations: {} className: \"\" enabled: false hosts: - host: chart-example.local paths: - path: / pathType: ImplementationSpecific tls: [] nameOverride: \"\" nodeSelector: {} podAnnotations: {} podSecurityContext: {} replicaCount: 3 resources: {} securityContext: {} service: port: 80 type: ClusterIP serviceAccount: annotations: {} create: true name: \"\" tolerations: []",
"oc apply -f config/samples/cache_v1_memcached.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 18m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 18m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 18m",
"apiVersion: cache.my.domain/v1 kind: MemcachedBackup metadata: name: memcachedbackup-sample spec: size: 2",
"oc apply -f config/samples/cache_v1_memcachedbackup.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcachedbackup-sample-8649699989-4bbzg 1/1 Running 0 22m memcachedbackup-sample-8649699989-mq6mx 1/1 Running 0 22m",
"oc delete -f config/samples/cache_v1_memcached.yaml",
"oc delete -f config/samples/cache_v1_memcachedbackup.yaml",
"make undeploy",
"operators.openshift.io/infrastructure-features: '[\"disconnected\", \"proxy-aware\"]'",
"operators.openshift.io/valid-subscription: '[\"OpenShift Container Platform\"]'",
"operators.openshift.io/valid-subscription: '[\"3Scale Commercial License\", \"Red Hat Managed Integration\"]'",
"operators.openshift.io/infrastructure-features: '[\"disconnected\", \"proxy-aware\"]' operators.openshift.io/valid-subscription: '[\"OpenShift Container Platform\"]'",
"spec: spec: containers: - command: - /manager env: - name: <related_image_environment_variable> 1 value: \"<related_image_reference_with_tag>\" 2",
"// deploymentForMemcached returns a memcached Deployment object Spec: corev1.PodSpec{ Containers: []corev1.Container{{ - Image: \"memcached:1.4.36-alpine\", 1 + Image: os.Getenv(\"<related_image_environment_variable>\"), 2 Name: \"memcached\", Command: []string{\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\"}, Ports: []corev1.ContainerPort{{",
"spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v - image: \"docker.io/memcached:1.4.36-alpine\" 1 + image: \"{{ lookup('env', '<related_image_environment_variable>') }}\" 2 ports: - containerPort: 11211",
"- group: demo.example.com version: v1alpha1 kind: Memcached chart: helm-charts/memcached overrideValues: 1 relatedImage: USD{<related_image_environment_variable>} 2",
"relatedImage: \"\"",
"containers: - name: {{ .Chart.Name }} securityContext: - toYaml {{ .Values.securityContext | nindent 12 }} image: \"{{ .Values.image.pullPolicy }} env: 1 - name: related_image 2 value: \"{{ .Values.relatedImage }}\" 3",
"BUNDLE_GEN_FLAGS ?= -q --overwrite --version USD(VERSION) USD(BUNDLE_METADATA_OPTS) # USE_IMAGE_DIGESTS defines if images are resolved via tags or digests # You can enable this value if you would like to use SHA Based Digests # To enable set flag to true USE_IMAGE_DIGESTS ?= false ifeq (USD(USE_IMAGE_DIGESTS), true) BUNDLE_GEN_FLAGS += --use-image-digests endif - USD(KUSTOMIZE) build config/manifests | operator-sdk generate bundle -q --overwrite --version USD(VERSION) USD(BUNDLE_METADATA_OPTS) 1 + USD(KUSTOMIZE) build config/manifests | operator-sdk generate bundle USD(BUNDLE_GEN_FLAGS) 2",
"make bundle USE_IMAGE_DIGESTS=true",
"metadata: annotations: operators.openshift.io/infrastructure-features: '[\"disconnected\"]'",
"labels: operatorframework.io/arch.<arch>: supported 1 operatorframework.io/os.<os>: supported 2",
"labels: operatorframework.io/os.linux: supported",
"labels: operatorframework.io/arch.amd64: supported",
"labels: operatorframework.io/arch.s390x: supported operatorframework.io/os.zos: supported operatorframework.io/os.linux: supported 1 operatorframework.io/arch.amd64: supported 2",
"metadata: annotations: operatorframework.io/suggested-namespace: <namespace> 1",
"module github.com/example-inc/memcached-operator go 1.15 require ( k8s.io/apimachinery v0.19.2 k8s.io/client-go v0.19.2 sigs.k8s.io/controller-runtime v0.7.0 operator-framework/operator-lib v0.3.0 )",
"import ( apiv1 \"github.com/operator-framework/api/pkg/operators/v1\" ) func NewUpgradeable(cl client.Client) (Condition, error) { return NewCondition(cl, \"apiv1.OperatorUpgradeable\") } cond, err := NewUpgradeable(cl);",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: webhook-operator.v0.0.1 spec: customresourcedefinitions: owned: - kind: WebhookTest name: webhooktests.webhook.operators.coreos.io 1 version: v1 install: spec: deployments: - name: webhook-operator-webhook strategy: deployment installModes: - supported: false type: OwnNamespace - supported: false type: SingleNamespace - supported: false type: MultiNamespace - supported: true type: AllNamespaces webhookdefinitions: - type: ValidatingAdmissionWebhook 2 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook failurePolicy: Fail generateName: vwebhooktest.kb.io rules: - apiGroups: - webhook.operators.coreos.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - webhooktests sideEffects: None webhookPath: /validate-webhook-operators-coreos-io-v1-webhooktest - type: MutatingAdmissionWebhook 3 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook failurePolicy: Fail generateName: mwebhooktest.kb.io rules: - apiGroups: - webhook.operators.coreos.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - webhooktests sideEffects: None webhookPath: /mutate-webhook-operators-coreos-io-v1-webhooktest - type: ConversionWebhook 4 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook generateName: cwebhooktest.kb.io sideEffects: None webhookPath: /convert conversionCRDs: - webhooktests.webhook.operators.coreos.io 5",
"- displayName: MongoDB Standalone group: mongodb.com kind: MongoDbStandalone name: mongodbstandalones.mongodb.com resources: - kind: Service name: '' version: v1 - kind: StatefulSet name: '' version: v1beta2 - kind: Pod name: '' version: v1 - kind: ConfigMap name: '' version: v1 specDescriptors: - description: Credentials for Ops Manager or Cloud Manager. displayName: Credentials path: credentials x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:Secret' - description: Project this deployment belongs to. displayName: Project path: project x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:ConfigMap' - description: MongoDB version to be installed. displayName: Version path: version x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:label' statusDescriptors: - description: The status of each of the pods for the MongoDB cluster. displayName: Pod Status path: pods x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:podStatuses' version: v1 description: >- MongoDB Deployment consisting of only one host. No replication of data.",
"required: - name: etcdclusters.etcd.database.coreos.com version: v1beta2 kind: EtcdCluster displayName: etcd Cluster description: Represents a cluster of etcd nodes.",
"versions: - name: v1alpha1 served: true storage: false - name: v1beta1 1 served: true storage: true",
"customresourcedefinitions: owned: - name: cluster.example.com version: v1beta1 1 kind: cluster displayName: Cluster",
"versions: - name: v1alpha1 served: false 1 storage: true",
"versions: - name: v1alpha1 served: false storage: false 1 - name: v1beta1 served: true storage: true 2",
"versions: - name: v1beta1 served: true storage: true",
"metadata: annotations: alm-examples: >- [{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdCluster\",\"metadata\":{\"name\":\"example\",\"namespace\":\"default\"},\"spec\":{\"size\":3,\"version\":\"3.2.13\"}},{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdRestore\",\"metadata\":{\"name\":\"example-etcd-cluster\"},\"spec\":{\"etcdCluster\":{\"name\":\"example-etcd-cluster\"},\"backupStorageType\":\"S3\",\"s3\":{\"path\":\"<full-s3-path>\",\"awsSecret\":\"<aws-secret>\"}}},{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdBackup\",\"metadata\":{\"name\":\"example-etcd-cluster-backup\"},\"spec\":{\"etcdEndpoints\":[\"<etcd-cluster-endpoints>\"],\"storageType\":\"S3\",\"s3\":{\"path\":\"<full-s3-path>\",\"awsSecret\":\"<aws-secret>\"}}}]",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: my-operator-v1.2.3 annotations: operators.operatorframework.io/internal-objects: '[\"my.internal.crd1.io\",\"my.internal.crd2.io\"]' 1",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: my-operator-v1.2.3 annotations: operatorframework.io/initialization-resource: |- { \"apiVersion\": \"ocs.openshift.io/v1\", \"kind\": \"StorageCluster\", \"metadata\": { \"name\": \"example-storagecluster\" }, \"spec\": { \"manageNodes\": false, \"monPVCTemplate\": { \"spec\": { \"accessModes\": [ \"ReadWriteOnce\" ], \"resources\": { \"requests\": { \"storage\": \"10Gi\" } }, \"storageClassName\": \"gp2\" } }, \"storageDeviceSets\": [ { \"count\": 3, \"dataPVCTemplate\": { \"spec\": { \"accessModes\": [ \"ReadWriteOnce\" ], \"resources\": { \"requests\": { \"storage\": \"1Ti\" } }, \"storageClassName\": \"gp2\", \"volumeMode\": \"Block\" } }, \"name\": \"example-deviceset\", \"placement\": {}, \"portable\": true, \"resources\": {} } ] } }",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle [-n <namespace>] \\ 1 <registry>/<user>/<bundle_image_name>:<tag>",
"make catalog-build CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>",
"make catalog-push CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>",
"make bundle-build bundle-push catalog-build catalog-push BUNDLE_IMG=<bundle_image_pull_spec> CATALOG_IMG=<index_image_pull_spec>",
"IMAGE_TAG_BASE=quay.io/example/my-operator",
"make bundle-build bundle-push catalog-build catalog-push",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: cs-memcached namespace: default spec: displayName: My Test publisher: Company sourceType: grpc image: quay.io/example/memcached-catalog:v0.0.1 1 updateStrategy: registryPoll: interval: 10m",
"oc get catalogsource",
"NAME DISPLAY TYPE PUBLISHER AGE cs-memcached My Test grpc Company 4h31m",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-test namespace: default spec: targetNamespaces: - default",
"\\ufeffapiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: catalogtest namespace: default spec: channel: \"alpha\" installPlanApproval: Manual name: catalog source: cs-memcached sourceNamespace: default startingCSV: memcached-operator.v0.0.1",
"oc get og",
"NAME AGE my-test 4h40m",
"oc get csv",
"NAME DISPLAY VERSION REPLACES PHASE memcached-operator.v0.0.1 Test 0.0.1 Succeeded",
"oc get pods",
"NAME READY STATUS RESTARTS AGE 9098d908802769fbde8bd45255e69710a9f8420a8f3d814abe88b68f8ervdj6 0/1 Completed 0 4h33m catalog-controller-manager-7fd5b7b987-69s4n 2/2 Running 0 4h32m cs-memcached-7622r 1/1 Running 0 4h33m",
"operator-sdk run bundle <registry>/<user>/memcached-operator:v0.0.1",
"INFO[0009] Successfully created registry pod: quay-io-demo-memcached-operator-v0-0-1 INFO[0009] Created CatalogSource: memcached-operator-catalog INFO[0010] OperatorGroup \"operator-sdk-og\" created INFO[0010] Created Subscription: memcached-operator-v0-0-1-sub INFO[0013] Approved InstallPlan install-bqggr for the Subscription: memcached-operator-v0-0-1-sub INFO[0013] Waiting for ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" to reach 'Succeeded' phase INFO[0013] Waiting for ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" to appear INFO[0019] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Succeeded",
"operator-sdk run bundle-upgrade <registry>/<user>/memcached-operator:v0.0.2",
"INFO[0002] Found existing subscription with name memcached-operator-v0-0-1-sub and namespace my-project INFO[0002] Found existing catalog source with name memcached-operator-catalog and namespace my-project INFO[0009] Successfully created registry pod: quay-io-demo-memcached-operator-v0-0-2 INFO[0009] Updated catalog source memcached-operator-catalog with address and annotations INFO[0010] Deleted previous registry pod with name \"quay-io-demo-memcached-operator-v0-0-1\" INFO[0041] Approved InstallPlan install-gvcjh for the Subscription: memcached-operator-v0-0-1-sub INFO[0042] Waiting for ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" to reach 'Succeeded' phase INFO[0042] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: InstallReady INFO[0043] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Installing INFO[0044] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Succeeded INFO[0044] Successfully upgraded to \"memcached-operator.v0.0.2\"",
"operator-sdk cleanup memcached-operator",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: \"olm.properties\": '[{\"type\": \"olm.maxOpenShiftVersion\", \"value\": \"<cluster_version>\"}]' 1",
"com.redhat.openshift.versions: \"v4.7-v4.9\" 1",
"LABEL com.redhat.openshift.versions=\"<versions>\" 1",
"operator-sdk scorecard <bundle_dir_or_image> [flags]",
"operator-sdk scorecard -h",
"./bundle └── tests └── scorecard └── config.yaml",
"kind: Configuration apiversion: scorecard.operatorframework.io/v1alpha3 metadata: name: config stages: - parallel: true tests: - image: quay.io/operator-framework/scorecard-test:v1.16.0 entrypoint: - scorecard-test - basic-check-spec labels: suite: basic test: basic-check-spec-test - image: quay.io/operator-framework/scorecard-test:v1.16.0 entrypoint: - scorecard-test - olm-bundle-validation labels: suite: olm test: olm-bundle-validation-test",
"make bundle",
"operator-sdk scorecard <bundle_dir_or_image>",
"{ \"apiVersion\": \"scorecard.operatorframework.io/v1alpha3\", \"kind\": \"TestList\", \"items\": [ { \"kind\": \"Test\", \"apiVersion\": \"scorecard.operatorframework.io/v1alpha3\", \"spec\": { \"image\": \"quay.io/operator-framework/scorecard-test:v1.16.0\", \"entrypoint\": [ \"scorecard-test\", \"olm-bundle-validation\" ], \"labels\": { \"suite\": \"olm\", \"test\": \"olm-bundle-validation-test\" } }, \"status\": { \"results\": [ { \"name\": \"olm-bundle-validation\", \"log\": \"time=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Found manifests directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Found metadata directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Getting mediaType info from manifests directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=info msg=\\\"Found annotations file\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=info msg=\\\"Could not find optional dependencies file\\\" name=bundle-test\\n\", \"state\": \"pass\" } ] } } ] }",
"-------------------------------------------------------------------------------- Image: quay.io/operator-framework/scorecard-test:v1.16.0 Entrypoint: [scorecard-test olm-bundle-validation] Labels: \"suite\":\"olm\" \"test\":\"olm-bundle-validation-test\" Results: Name: olm-bundle-validation State: pass Log: time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Found manifests directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Found metadata directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Getting mediaType info from manifests directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=info msg=\"Found annotations file\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=info msg=\"Could not find optional dependencies file\" name=bundle-test",
"operator-sdk scorecard <bundle_dir_or_image> -o text --selector=test=basic-check-spec-test",
"operator-sdk scorecard <bundle_dir_or_image> -o text --selector=suite=olm",
"operator-sdk scorecard <bundle_dir_or_image> -o text --selector='test in (basic-check-spec-test,olm-bundle-validation-test)'",
"apiVersion: scorecard.operatorframework.io/v1alpha3 kind: Configuration metadata: name: config stages: - parallel: true 1 tests: - entrypoint: - scorecard-test - basic-check-spec image: quay.io/operator-framework/scorecard-test:v1.16.0 labels: suite: basic test: basic-check-spec-test - entrypoint: - scorecard-test - olm-bundle-validation image: quay.io/operator-framework/scorecard-test:v1.16.0 labels: suite: olm test: olm-bundle-validation-test",
"// Copyright 2020 The Operator-SDK Authors // // Licensed under the Apache License, Version 2.0 (the \"License\"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an \"AS IS\" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package main import ( \"encoding/json\" \"fmt\" \"log\" \"os\" scapiv1alpha3 \"github.com/operator-framework/api/pkg/apis/scorecard/v1alpha3\" apimanifests \"github.com/operator-framework/api/pkg/manifests\" ) // This is the custom scorecard test example binary // As with the Redhat scorecard test image, the bundle that is under // test is expected to be mounted so that tests can inspect the // bundle contents as part of their test implementations. // The actual test is to be run is named and that name is passed // as an argument to this binary. This argument mechanism allows // this binary to run various tests all from within a single // test image. const PodBundleRoot = \"/bundle\" func main() { entrypoint := os.Args[1:] if len(entrypoint) == 0 { log.Fatal(\"Test name argument is required\") } // Read the pod's untar'd bundle from a well-known path. cfg, err := apimanifests.GetBundleFromDir(PodBundleRoot) if err != nil { log.Fatal(err.Error()) } var result scapiv1alpha3.TestStatus // Names of the custom tests which would be passed in the // `operator-sdk` command. switch entrypoint[0] { case CustomTest1Name: result = CustomTest1(cfg) case CustomTest2Name: result = CustomTest2(cfg) default: result = printValidTests() } // Convert scapiv1alpha3.TestResult to json. prettyJSON, err := json.MarshalIndent(result, \"\", \" \") if err != nil { log.Fatal(\"Failed to generate json\", err) } fmt.Printf(\"%s\\n\", string(prettyJSON)) } // printValidTests will print out full list of test names to give a hint to the end user on what the valid tests are. func printValidTests() scapiv1alpha3.TestStatus { result := scapiv1alpha3.TestResult{} result.State = scapiv1alpha3.FailState result.Errors = make([]string, 0) result.Suggestions = make([]string, 0) str := fmt.Sprintf(\"Valid tests for this image include: %s %s\", CustomTest1Name, CustomTest2Name) result.Errors = append(result.Errors, str) return scapiv1alpha3.TestStatus{ Results: []scapiv1alpha3.TestResult{result}, } } const ( CustomTest1Name = \"customtest1\" CustomTest2Name = \"customtest2\" ) // Define any operator specific custom tests here. // CustomTest1 and CustomTest2 are example test functions. Relevant operator specific // test logic is to be implemented in similarly. 
func CustomTest1(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus { r := scapiv1alpha3.TestResult{} r.Name = CustomTest1Name r.State = scapiv1alpha3.PassState r.Errors = make([]string, 0) r.Suggestions = make([]string, 0) almExamples := bundle.CSV.GetAnnotations()[\"alm-examples\"] if almExamples == \"\" { fmt.Println(\"no alm-examples in the bundle CSV\") } return wrapResult(r) } func CustomTest2(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus { r := scapiv1alpha3.TestResult{} r.Name = CustomTest2Name r.State = scapiv1alpha3.PassState r.Errors = make([]string, 0) r.Suggestions = make([]string, 0) almExamples := bundle.CSV.GetAnnotations()[\"alm-examples\"] if almExamples == \"\" { fmt.Println(\"no alm-examples in the bundle CSV\") } return wrapResult(r) } func wrapResult(r scapiv1alpha3.TestResult) scapiv1alpha3.TestStatus { return scapiv1alpha3.TestStatus{ Results: []scapiv1alpha3.TestResult{r}, } }",
"// Simple query nn := types.NamespacedName{ Name: \"cluster\", } infraConfig := &configv1.Infrastructure{} err = crClient.Get(context.Background(), nn, infraConfig) if err != nil { return err } fmt.Printf(\"using crclient: %v\\n\", infraConfig.Status.ControlPlaneTopology) fmt.Printf(\"using crclient: %v\\n\", infraConfig.Status.InfrastructureTopology)",
"operatorConfigInformer := configinformer.NewSharedInformerFactoryWithOptions(configClient, 2*time.Second) infrastructureLister = operatorConfigInformer.Config().V1().Infrastructures().Lister() infraConfig, err := configClient.ConfigV1().Infrastructures().Get(context.Background(), \"cluster\", metav1.GetOptions{}) if err != nil { return err } // fmt.Printf(\"%v\\n\", infraConfig) fmt.Printf(\"%v\\n\", infraConfig.Status.ControlPlaneTopology) fmt.Printf(\"%v\\n\", infraConfig.Status.InfrastructureTopology)",
"../prometheus",
"package controllers import ( \"github.com/prometheus/client_golang/prometheus\" \"sigs.k8s.io/controller-runtime/pkg/metrics\" ) var ( widgets = prometheus.NewCounter( prometheus.CounterOpts{ Name: \"widgets_total\", Help: \"Number of widgets processed\", }, ) widgetFailures = prometheus.NewCounter( prometheus.CounterOpts{ Name: \"widget_failures_total\", Help: \"Number of failed widgets\", }, ) ) func init() { // Register custom metrics with the global prometheus registry metrics.Registry.MustRegister(widgets, widgetFailures) }",
"func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { // Add metrics widgets.Inc() widgetFailures.Inc() return ctrl.Result{}, nil }",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: prometheus-k8s-role namespace: <operator_namespace> rules: - apiGroups: - \"\" resources: - endpoints - pods - services - nodes - secrets verbs: - get - list - watch",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: prometheus-k8s-rolebinding namespace: memcached-operator-system roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: prometheus-k8s-role subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring",
"oc apply -f config/prometheus/role.yaml",
"oc apply -f config/prometheus/rolebinding.yaml",
"oc label namespace <operator_namespace> openshift.io/cluster-monitoring=\"true\"",
"operator-sdk init --plugins=ansible --domain=testmetrics.com",
"operator-sdk create api --group metrics --version v1 --kind Testmetrics --generate-role",
"--- tasks file for Memcached - name: start k8sstatus k8s: definition: kind: Deployment apiVersion: apps/v1 metadata: name: '{{ ansible_operator_meta.name }}-memcached' namespace: '{{ ansible_operator_meta.namespace }}' spec: replicas: \"{{size}}\" selector: matchLabels: app: memcached template: metadata: labels: app: memcached spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v image: \"docker.io/memcached:1.4.36-alpine\" ports: - containerPort: 11211 - osdk_metric: name: my_thing_counter description: This metric counts things counter: {} - osdk_metric: name: my_counter_metric description: Add 3.14 to the counter counter: increment: yes - osdk_metric: name: my_gauge_metric description: Create my gauge and set it to 2. gauge: set: 2 - osdk_metric: name: my_histogram_metric description: Observe my histogram histogram: observe: 2 - osdk_metric: name: my_summary_metric description: Observe my summary summary: observe: 2",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"apiVersion: metrics.testmetrics.com/v1 kind: Testmetrics metadata: name: testmetrics-sample spec: size: 1",
"oc create -f config/samples/metrics_v1_testmetrics.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE ansiblemetrics-controller-manager-<id> 2/2 Running 0 149m testmetrics-sample-memcached-<id> 1/1 Running 0 147m",
"oc get ep",
"NAME ENDPOINTS AGE ansiblemetrics-controller-manager-metrics-service 10.129.2.70:8443 150m",
"token=`oc sa get-token prometheus-k8s -n openshift-monitoring`",
"oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep my_counter",
"HELP my_counter_metric Add 3.14 to the counter TYPE my_counter_metric counter my_counter_metric 2",
"oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep gauge",
"HELP my_gauge_metric Create my gauge and set it to 2.",
"oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep Observe",
"HELP my_histogram_metric Observe my histogram HELP my_summary_metric Observe my summary",
"import ( \"github.com/operator-framework/operator-sdk/pkg/leader\" ) func main() { err = leader.Become(context.TODO(), \"memcached-operator-lock\") if err != nil { log.Error(err, \"Failed to retry for leader lock\") os.Exit(1) } }",
"import ( \"sigs.k8s.io/controller-runtime/pkg/manager\" ) func main() { opts := manager.Options{ LeaderElection: true, LeaderElectionID: \"memcached-operator-lock\" } mgr, err := manager.New(cfg, opts) }",
"cfg = Config{ log: logf.Log.WithName(\"prune\"), DryRun: false, Clientset: client, LabelSelector: \"app=<operator_name>\", Resources: []schema.GroupVersionKind{ {Group: \"\", Version: \"\", Kind: PodKind}, }, Namespaces: []string{\"default\"}, Strategy: StrategyConfig{ Mode: MaxCountStrategy, MaxCountSetting: 1, }, PreDeleteHook: myhook, }",
"err := cfg.Execute(ctx)",
"packagemanifests/ └── etcd ├── 0.0.1 │ ├── etcdcluster.crd.yaml │ └── etcdoperator.clusterserviceversion.yaml ├── 0.0.2 │ ├── etcdbackup.crd.yaml │ ├── etcdcluster.crd.yaml │ ├── etcdoperator.v0.0.2.clusterserviceversion.yaml │ └── etcdrestore.crd.yaml └── etcd.package.yaml",
"bundle/ ├── bundle-0.0.1 │ ├── bundle.Dockerfile │ ├── manifests │ │ ├── etcdcluster.crd.yaml │ │ ├── etcdoperator.clusterserviceversion.yaml │ ├── metadata │ │ └── annotations.yaml │ └── tests │ └── scorecard │ └── config.yaml └── bundle-0.0.2 ├── bundle.Dockerfile ├── manifests │ ├── etcdbackup.crd.yaml │ ├── etcdcluster.crd.yaml │ ├── etcdoperator.v0.0.2.clusterserviceversion.yaml │ ├── etcdrestore.crd.yaml ├── metadata │ └── annotations.yaml └── tests └── scorecard └── config.yaml",
"operator-sdk pkgman-to-bundle <package_manifests_dir> \\ 1 [--output-dir <directory>] \\ 2 --image-tag-base <image_name_base> 3",
"operator-sdk run bundle <bundle_image_name>:<tag>",
"INFO[0025] Successfully created registry pod: quay-io-my-etcd-0-9-4 INFO[0025] Created CatalogSource: etcd-catalog INFO[0026] OperatorGroup \"operator-sdk-og\" created INFO[0026] Created Subscription: etcdoperator-v0-9-4-sub INFO[0031] Approved InstallPlan install-5t58z for the Subscription: etcdoperator-v0-9-4-sub INFO[0031] Waiting for ClusterServiceVersion \"default/etcdoperator.v0.9.4\" to reach 'Succeeded' phase INFO[0032] Waiting for ClusterServiceVersion \"default/etcdoperator.v0.9.4\" to appear INFO[0048] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Pending INFO[0049] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Installing INFO[0064] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Succeeded INFO[0065] OLM has successfully installed \"etcdoperator.v0.9.4\"",
"operator-sdk <command> [<subcommand>] [<argument>] [<flags>]",
"operator-sdk completion bash",
"bash completion for operator-sdk -*- shell-script -*- ex: ts=4 sw=4 et filetype=sh",
"oc -n [namespace] edit cm hw-event-proxy-operator-manager-config",
"apiVersion: controller-runtime.sigs.k8s.io/v1alpha1 kind: ControllerManagerConfig health: healthProbeBindAddress: :8081 metrics: bindAddress: 127.0.0.1:8080 webhook: port: 9443 leaderElection: leaderElect: true resourceName: 6e7a703c.redhat-cne.org",
"oc get clusteroperator authentication -o yaml",
"oc -n openshift-monitoring edit cm cluster-monitoring-config",
"oc edit etcd cluster",
"oc get clusteringresses.ingress.openshift.io -n openshift-ingress-operator default -o yaml",
"oc get deployment -n openshift-ingress",
"oc get network/cluster -o jsonpath='{.status.clusterNetwork[*]}'",
"map[cidr:10.128.0.0/14 hostPrefix:23]",
"oc edit kubeapiserver",
"oc get clusteroperator openshift-controller-manager -o yaml",
"oc get crd openshiftcontrollermanagers.operator.openshift.io -o yaml"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html-single/operators/index |
Chapter 8. Updating and Migrating Identity Management | Chapter 8. Updating and Migrating Identity Management 8.1. Updating Identity Management You can use the yum utility to update the Identity Management packages on the system. Warning Before installing an update, make sure you have applied all previously released errata relevant to the RHEL system. For more information, see the How do I apply package updates to my RHEL system? KCS article. Additionally, if a new minor Red Hat Enterprise Linux version is available, such as 7.3, yum upgrades the Identity Management server or client to this version. Note This section does not describe migrating Identity Management from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7. If you want to migrate, see Section 8.2, "Migrating Identity Management from Red Hat Enterprise Linux 6 to Version 7" . 8.1.1. Considerations for Updating Identity Management After you update the Identity Management packages on at least one server, all other servers in the topology receive the updated schema, even if you do not update their packages. This ensures that any new entries which use the new schema can be replicated among the other servers. Downgrading Identity Management packages is not supported. Important Do not run the yum downgrade command on any of the ipa-* packages. Red Hat recommends upgrading to the next version only. For example, if you want to upgrade to Identity Management for Red Hat Enterprise Linux 7.4, we recommend upgrading from Identity Management for Red Hat Enterprise Linux 7.3. Upgrading from earlier versions can cause problems. 8.1.2. Using yum to Update the Identity Management Packages To update all Identity Management packages on a server or client: Warning When upgrading multiple Identity Management servers, wait at least 10 minutes between each upgrade. When two or more servers are upgraded simultaneously or with only short intervals between the upgrades, there is not enough time to replicate the post-upgrade data changes throughout the topology, which can result in conflicting replication events. Related Information For details on using the yum utility, see Yum in the System Administrator's Guide . Important Due to CVE-2014-3566 , the Secure Socket Layer version 3 (SSLv3) protocol needs to be disabled in the mod_nss module. You can ensure that by following these steps: Edit the /etc/httpd/conf.d/nss.conf file and set the NSSProtocol parameter to TLSv1.0 (for backward compatibility), TLSv1.1 , and TLSv1.2 . Restart the httpd service. Note that Identity Management in Red Hat Enterprise Linux 7 automatically performs the above steps when the yum update ipa-* command is launched to upgrade the main packages. | [
"yum update ipa-*",
"NSSProtocol TLSv1.0,TLSv1.1,TLSv1.2",
"systemctl restart httpd.service"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/updating-migrating |
Chapter 7. Installing a cluster on AWS into an existing VPC | Chapter 7. Installing a cluster on AWS into an existing VPC In OpenShift Container Platform version 4.14, you can install a cluster into an existing Amazon Virtual Private Cloud (VPC) on Amazon Web Services (AWS). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. If the existing VPC is owned by a different account than the cluster, you shared the VPC between accounts. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 7.2. About using a custom VPC In OpenShift Container Platform 4.14, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself. 7.2.1. Requirements for using your VPC The installation program no longer creates the following components: Internet gateways NAT gateways Subnets Route tables VPCs VPC DHCP options VPC endpoints Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Amazon VPC console wizard configurations and Work with VPCs and subnets in the AWS documentation for more information on creating and managing an AWS VPC. The installation program cannot: Subdivide network ranges for the cluster to use. Set route tables for the subnets. Set VPC options like DHCP. You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC. Your VPC must meet the following characteristics: Create a public and private subnet for each availability zone that your cluster uses. Each availability zone can contain no more than one public and one private subnet. 
For an example of this type of configuration, see VPC with public and private subnets (NAT) in the AWS documentation. Record each subnet ID. Completing the installation requires that you enter these values in the platform section of the install-config.yaml file. See Finding a subnet ID in the AWS documentation. The VPC's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The subnet CIDR blocks must belong to the machine CIDR that you specify. The VPC must have a public internet gateway attached to it. For each availability zone: The public subnet requires a route to the internet gateway. The public subnet requires a NAT gateway with an EIP address. The private subnet requires a route to the NAT gateway in public subnet. The VPC must not use the kubernetes.io/cluster/.*: owned , Name , and openshift.io/cluster tags. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails. You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone and platform.aws.hostedZoneRole fields in the install-config.yaml file. You can use a private hosted zone from another account by sharing it with the account where you install the cluster. If you use a private hosted zone from another account, you must use the Passthrough or Manual credentials mode. If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. 
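If you want to confirm these VPC settings from the command line before you run the installation program, the following AWS CLI sketch shows one way to do it. This is not part of the documented procedure; the VPC ID, subnet IDs, security group ID, route table ID, and the us-east-2 region are placeholder values that you must replace with your own, and the AWS CLI expects the com.amazonaws.<region>.<service> form of the endpoint names listed above.
# Verify that the DNS attributes are enabled on the VPC
aws ec2 describe-vpc-attribute --vpc-id vpc-0abc1234 --attribute enableDnsSupport
aws ec2 describe-vpc-attribute --vpc-id vpc-0abc1234 --attribute enableDnsHostnames
# Enable DNS hostnames if the previous command reports false
aws ec2 modify-vpc-attribute --vpc-id vpc-0abc1234 --enable-dns-hostnames '{"Value":true}'
# Create an interface endpoint for EC2; repeat with the elasticloadbalancing service name as needed
aws ec2 create-vpc-endpoint --vpc-id vpc-0abc1234 --vpc-endpoint-type Interface --service-name com.amazonaws.us-east-2.ec2 --subnet-ids subnet-0aaa1111 subnet-0bbb2222 --security-group-ids sg-0ccc3333 --private-dns-enabled
# Create a gateway endpoint for S3 and attach it to the route tables of the cluster subnets
aws ec2 create-vpc-endpoint --vpc-id vpc-0abc1234 --vpc-endpoint-type Gateway --service-name com.amazonaws.us-east-2.s3 --route-table-ids rtb-0ddd4444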
Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. 7.2.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide private subnets. The subnet CIDRs belong to the machine CIDR that you specified. You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone. You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used. 7.2.3. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resource in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. 
You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes. 7.2.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 7.2.5. AWS security groups By default, the installation program creates and attaches security groups to control plane and compute machines. The rules associated with the default security groups cannot be modified. However, you can apply additional existing AWS security groups, which are associated with your existing VPC, to control plane and compute machines. Applying custom security groups can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. As part of the installation process, you apply custom security groups by modifying the install-config.yaml file before deploying the cluster. For more information, see "Applying existing AWS security groups to the cluster". 7.2.6. Modifying trust policy when installing into a shared VPC If you install your cluster using a shared VPC, you can use the Passthrough or Manual credentials mode. You must add the IAM role used to install the cluster as a principal in the trust policy of the account that owns the VPC. If you use Passthrough mode, add the Amazon Resource Name (ARN) of the account that creates the cluster, such as arn:aws:iam::123456789012:user/clustercreator , to the trust policy as a principal. If you use Manual mode, add the ARN of the account that creates the cluster as well as the ARN of the ingress operator role in the cluster owner account, such as arn:aws:iam::123456789012:role/<cluster-name>-openshift-ingress-operator-cloud-credentials , to the trust policy as principals. You must add the following actions to the policy: Example 7.1. Required actions for shared VPC installation route53:ChangeResourceRecordSets route53:ListHostedZones route53:ListHostedZonesByName route53:ListResourceRecordSets route53:ChangeTagsForResource route53:GetAccountLimit route53:GetChange route53:GetHostedZone route53:ListTagsForResource route53:UpdateHostedZoneComment tag:GetResources tag:UntagResources 7.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. 
With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 7.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. 
Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 7.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 
Select AWS as the platform to target. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Select the AWS region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for AWS 7.6.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 7.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 7.6.2. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 7.2. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 7.6.3. 
Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 7.3. Machine types based on 64-bit ARM architecture c6g.* m6g.* r8g.* 7.6.4. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 pullSecret: '{"auths": ...}' 22 1 12 14 22 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. 
Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 16 If you provide your own VPC, specify subnets for each availability zone that your cluster uses. 17 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 18 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 19 The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone. 20 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 21 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 7.6.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. 
Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 7.6.6. 
Applying existing AWS security groups to the cluster Applying existing AWS security groups to your control plane and compute machines can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. Prerequisites You have created the security groups in AWS. For more information, see the AWS documentation about working with security groups . The security groups must be associated with the existing VPC that you are deploying the cluster to. The security groups cannot be associated with another VPC. You have an existing install-config.yaml file. Procedure In the install-config.yaml file, edit the compute.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your compute machines. Edit the controlPlane.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your control plane machines. Save the file and reference it when deploying the cluster. Sample install-config.yaml file that specifies custom security groups # ... compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3 1 Specify the name of the security group as it appears in the Amazon EC2 console, including the sg prefix. 2 Specify subnets for each availability zone that your cluster uses. 7.7. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. 
Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 7.8. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an AWS cluster to use short-term credentials . 7.8.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. 
The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: "*" ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 7.8.2. Configuring an AWS cluster to use short-term credentials To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster. 7.8.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created an AWS account for the ccoctl utility to use with the following permissions: Example 7.4. Required AWS permissions Required iam permissions iam:CreateOpenIDConnectProvider iam:CreateRole iam:DeleteOpenIDConnectProvider iam:DeleteRole iam:DeleteRolePolicy iam:GetOpenIDConnectProvider iam:GetRole iam:GetUser iam:ListOpenIDConnectProviders iam:ListRolePolicies iam:ListRoles iam:PutRolePolicy iam:TagOpenIDConnectProvider iam:TagRole Required s3 permissions s3:CreateBucket s3:DeleteBucket s3:DeleteObject s3:GetBucketAcl s3:GetBucketTagging s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutBucketTagging s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Required cloudfront permissions cloudfront:ListCloudFrontOriginAccessIdentities cloudfront:ListDistributions cloudfront:ListTagsForResource If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions: Example 7.5. Additional permissions for a private S3 bucket with CloudFront cloudfront:CreateCloudFrontOriginAccessIdentity cloudfront:CreateDistribution cloudfront:DeleteCloudFrontOriginAccessIdentity cloudfront:DeleteDistribution cloudfront:GetCloudFrontOriginAccessIdentity cloudfront:GetCloudFrontOriginAccessIdentityConfig cloudfront:GetDistribution cloudfront:TagResource cloudfront:UpdateDistribution Note These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command. 
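If you create the AWS user for the ccoctl utility yourself, one way to grant it the permissions listed above is an inline IAM policy. The following is a minimal sketch under assumed names, not an official policy document: ccoctl-user and ccoctl-policy.json are placeholders, and the action list is abbreviated, so extend it with the remaining iam, s3, and, if required, cloudfront actions from the lists above. Save the policy document as ccoctl-policy.json:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:CreateOpenIDConnectProvider",
        "iam:CreateRole",
        "iam:PutRolePolicy",
        "s3:CreateBucket",
        "s3:PutBucketPolicy",
        "s3:PutObject"
      ],
      "Resource": "*"
    }
  ]
}
Then attach it to the user that will run ccoctl, for example:
aws iam put-user-policy --user-name ccoctl-user --policy-name ccoctl-inline --policy-document file://ccoctl-policy.json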
Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 7.8.2.2. Creating AWS resources with the Cloud Credential Operator utility You have the following options when creating AWS resources: You can use the ccoctl aws create-all command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command . If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually . 7.8.2.2.1. Creating AWS resources with a single command If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources. Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. 
Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-all \ --name=<name> \ 1 --region=<aws_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> \ 4 --create-private-s3-bucket 5 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the AWS region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 5 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 7.8.2.2.2. Creating AWS resources individually You can use the ccoctl tool to create AWS resources individually. This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments. Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command". 
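Whichever approach you choose, you can spot-check the IAM roles that ccoctl created directly from the AWS CLI. The following query is a sketch rather than part of the documented verification steps; <name> is the tracking name that you pass to ccoctl with the --name flag, and the created role names are typically prefixed with that value:
aws iam list-roles --query "Roles[?contains(RoleName, '<name>')].{Name:RoleName,Arn:Arn}" --output table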
Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Extract and prepare the ccoctl binary. Procedure Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command: USD ccoctl aws create-key-pair Example output 2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files. This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key . Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command: USD ccoctl aws create-identity-provider \ --name=<name> \ 1 --region=<aws_region> \ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3 1 <name> is the name used to tag any cloud resources that are created for tracking. 2 <aws-region> is the AWS region in which cloud resources will be created. 3 <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated. Example output 2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml . This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens. Create IAM roles for each component in the cluster: Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 
3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: $ ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com Note For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: $ ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 7.8.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: $ openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: $ cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: $ cp -a /<path_to_ccoctl_output_dir>/tls . 7.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform.
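Before you deploy, you can run an informal spot check that the manifests and signing key from the previous section landed where the installation program expects them. This check is an illustrative addition rather than a documented step, and the paths simply follow the cp commands above: $ ls <installation_directory>/manifests/ | grep credentials $ ls <installation_directory>/tls/ The first listing should show the component credentials secrets that ccoctl generated, and the second should contain bound-service-account-signing-key.key .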
Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: $ ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 7.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI.
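As an informal preliminary check that is not part of the documented procedure, you can confirm that the oc client is installed and available on your PATH before you continue: $ oc version --client This prints only the client version; access to the cluster itself is configured in the next step by exporting the kubeconfig file.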
Procedure Export the kubeadmin credentials: $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: $ oc whoami Example output system:admin 7.11. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: $ cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: $ oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 7.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 7.13. Next steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials . | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 pullSecret: '{\"auths\": ...}' 22",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"ccoctl aws create-key-pair",
"2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer",
"ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3",
"2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_aws/installing-aws-vpc |
Customizing the GNOME desktop environment | Customizing the GNOME desktop environment Red Hat Enterprise Linux 9 Customizing the GNOME desktop environment on Red Hat Enterprise Linux 9 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/customizing_the_gnome_desktop_environment/index |
Planning your deployment | Planning your deployment Red Hat OpenShift Data Foundation 4.14 Important considerations when deploying Red Hat OpenShift Data Foundation 4.14 Red Hat Storage Documentation Team Abstract Read this document for important considerations when planning your Red Hat OpenShift Data Foundation deployment. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/planning_your_deployment/index |
Chapter 6. Upgrading Data Grid clusters | Chapter 6. Upgrading Data Grid clusters Data Grid Operator lets you upgrade Data Grid clusters from one version to another without downtime or data loss. Important Hot Rod rolling upgrades are available as a technology preview feature. 6.1. Technology preview features Technology preview features or capabilities are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using technology preview features or capabilities for production. These features provide early access to upcoming product features, which enables you to test functionality and provide feedback during the development process. For more information, see Red Hat Technology Preview Features Support Scope . 6.2. Data Grid cluster upgrades The spec.upgrades.type field controls how Data Grid Operator upgrades your Data Grid cluster when new versions become available. There are two types of cluster upgrade: Shutdown Upgrades Data Grid clusters with service downtime. This is the default upgrade type. HotRodRolling Upgrades Data Grid clusters without service downtime. Shutdown upgrades To perform a shutdown upgrade, Data Grid Operator does the following: Gracefully shuts down the existing cluster. Removes the existing cluster. Creates a new cluster with the target version. Hot Rod rolling upgrades To perform a Hot Rod rolling upgrade, Data Grid Operator does the following: Creates a new Data Grid cluster with the target version that runs alongside your existing cluster. Creates a remote cache store to transfer data from the existing cluster to the new cluster. Redirects all clients to the new cluster. Removes the existing cluster when all data and client connections are transferred to the new cluster. Important You should not perform Hot Rod rolling upgrades with caches that enable passivation with persistent cache stores. In the event that the upgrade does not complete successfully, passivation can result in data loss when Data Grid Operator rolls back the target cluster. If your cache configuration enables passivation you should perform a shutdown upgrade. 6.3. Upgrading Data Grid clusters with downtime Upgrading Data Grid clusters with downtime results in service disruption but does not require any additional capacity. Prerequisites The Data Grid Operator version you have installed supports the Data Grid target version. If required, configure a persistent cache store to preserve your data during the upgrade. Important At the start of the upgrade process Data Grid Operator shuts down your existing cluster. This results in data loss if you do not configure a persistent cache store. Procedure Specify the Data Grid version number in the spec.version field. Ensure that Shutdown is set as the value for the spec.upgrades.type field, which is the default. Apply your changes, if necessary. When new Data Grid version becomes available, you must manually change the value in the spec.version field to trigger the upgrade. 6.4. Performing Hot Rod rolling upgrades for Data Grid clusters Performing Hot Rod rolling upgrades lets you move to a new Data Grid version without service disruption. However, this upgrade type requires additional capacity and temporarily results in two Data Grid clusters with different versions running concurrently. Prerequisite The Data Grid Operator version you have installed supports the Data Grid target version. Procedure Specify the Data Grid version number in the spec.version field. 
Specify HotRodRolling as the value for the spec.upgrades.type field. Apply your changes. When new Data Grid version becomes available, you must manually change the value in the spec.version field to trigger the upgrade. 6.4.1. Recovering from a failed Hot Rod rolling upgrade You can roll back a failed Hot Rod rolling upgrade to the version if the original cluster is still present. Prerequisites Hot Rod rolling upgrade is in progress and the initial Data Grid cluster is present. Procedure Ensure the Hot Rod rolling upgrade is in progress. The status.hotRodRollingUpgradeStatus field must be present. Update spec.version field of your Infinispan CR to the original cluster version defined in the status.hotRodRollingUpgradeStatus . Data Grid Operator deletes the newly created cluster. | [
"spec: version: 8.4.6-1 upgrades: type: Shutdown",
"spec: version: 8.4.6-1 upgrades: type: HotRodRolling",
"get infinispan <cr_name> -o yaml"
]
| https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_operator_guide/upgrading-clusters |
Chapter 6. Performing and configuring basic builds | Chapter 6. Performing and configuring basic builds The following sections provide instructions for basic build operations, including starting and canceling builds, editing BuildConfigs , deleting BuildConfigs , viewing build details, and accessing build logs. 6.1. Starting a build You can manually start a new build from an existing build configuration in your current project. Procedure To start a build manually, enter the following command: $ oc start-build <buildconfig_name> 6.1.1. Re-running a build You can manually re-run a build using the --from-build flag. Procedure To manually re-run a build, enter the following command: $ oc start-build --from-build=<build_name> 6.1.2. Streaming build logs You can specify the --follow flag to stream the build's logs in stdout . Procedure To manually stream a build's logs in stdout , enter the following command: $ oc start-build <buildconfig_name> --follow 6.1.3. Setting environment variables when starting a build You can specify the --env flag to set any desired environment variable for the build. Procedure To specify a desired environment variable, enter the following command: $ oc start-build <buildconfig_name> --env=<key>=<value> 6.1.4. Starting a build with source Rather than relying on a Git source pull for a build, you can also start a build by directly pushing your source, which could be the contents of a Git or SVN working directory, a set of pre-built binary artifacts you want to deploy, or a single file. This can be done by specifying one of the following options for the start-build command: Option Description --from-dir=<directory> Specifies a directory that will be archived and used as a binary input for the build. --from-file=<file> Specifies a single file that will be the only file in the build source. The file is placed in the root of an empty directory with the same file name as the original file provided. --from-repo=<local_source_repo> Specifies a path to a local repository to use as the binary input for a build. Add the --commit option to control which branch, tag, or commit is used for the build. When passing any of these options directly to the build, the contents are streamed to the build and override the current build source settings. Note Builds triggered from binary input will not preserve the source on the server, so rebuilds triggered by base image changes will use the source specified in the build configuration. Procedure To start a build from a source code repository and send the contents of a local Git repository as an archive from the tag v2 , enter the following command: $ oc start-build hello-world --from-repo=../hello-world --commit=v2 6.2. Canceling a build You can cancel a build using the web console, or with the following CLI command. Procedure To manually cancel a build, enter the following command: $ oc cancel-build <build_name> 6.2.1. Canceling multiple builds You can cancel multiple builds with the following CLI command. Procedure To manually cancel multiple builds, enter the following command: $ oc cancel-build <build1_name> <build2_name> <build3_name> 6.2.2. Canceling all builds You can cancel all builds from the build configuration with the following CLI command. Procedure To cancel all builds, enter the following command: $ oc cancel-build bc/<buildconfig_name> 6.2.3. Canceling all builds in a given state You can cancel all builds in a given state, such as new or pending , while ignoring the builds in other states.
Procedure To cancel all builds in a given state, enter the following command: $ oc cancel-build bc/<buildconfig_name> --state=<state> 6.3. Editing a BuildConfig To edit your build configurations, you use the Edit BuildConfig option in the Builds view of the Developer perspective. You can use either of the following views to edit a BuildConfig : The Form view enables you to edit your BuildConfig using the standard form fields and checkboxes. The YAML view enables you to edit your BuildConfig with full control over the operations. You can switch between the Form view and YAML view without losing any data. The data in the Form view is transferred to the YAML view and vice versa. Procedure In the Builds view of the Developer perspective, click the Options menu to see the Edit BuildConfig option. Click Edit BuildConfig to see the Form view option. In the Git section, enter the Git repository URL for the codebase you want to use to create an application. The URL is then validated. Optional: Click Show Advanced Git Options to add details such as: Git Reference to specify a branch, tag, or commit that contains code you want to use to build the application. Context Dir to specify the subdirectory that contains code you want to use to build the application. Source Secret to create a Secret Name with credentials for pulling your source code from a private repository. In the Build from section, select the option that you would like to build from. You can use the following options: Image Stream tag references an image for a given image stream and tag. Enter the project, image stream, and tag of the location you would like to build from and push to. Image Stream image references an image for a given image stream and image name. Enter the image stream image you would like to build from. Also enter the project, image stream, and tag to push to. Docker image : The Docker image is referenced through a Docker image repository. You will also need to enter the project, image stream, and tag to refer to where you would like to push to. Optional: In the Environment Variables section, add the environment variables associated with the project by using the Name and Value fields. To add more environment variables, use Add Value , or Add from ConfigMap and Secret . Optional: To further customize your application, use the following advanced options: Trigger Triggers a new image build when the builder image changes. Add more triggers by clicking Add Trigger and selecting the Type and Secret . Secrets Adds secrets for your application. Add more secrets by clicking Add secret and selecting the Secret and Mount point . Policy Click Run policy to select the build run policy. The selected policy determines the order in which builds created from the build configuration must run. Hooks Select Run build hooks after image is built to run commands at the end of the build and verify the image. Add Hook type , Command , and Arguments to append to the command. Click Save to save the BuildConfig . 6.4. Deleting a BuildConfig You can delete a BuildConfig using the following command. Procedure To delete a BuildConfig , enter the following command: $ oc delete bc <BuildConfigName> This also deletes all builds that were instantiated from this BuildConfig . To delete a BuildConfig and keep the builds instantiated from the BuildConfig , specify the --cascade=false flag when you enter the following command: $ oc delete --cascade=false bc <BuildConfigName> 6.5.
Viewing build details You can view build details with the web console or by using the oc describe CLI command. This displays information including: The build source. The build strategy. The output destination. Digest of the image in the destination registry. How the build was created. If the build uses the Source strategy, the oc describe output also includes information about the source revision used for the build, including the commit ID, author, committer, and message. Procedure To view build details, enter the following command: $ oc describe build <build_name> 6.6. Accessing build logs You can access build logs using the web console or the CLI. Procedure To stream the logs using the build directly, enter the following command: $ oc logs -f build/<build_name> 6.6.1. Accessing BuildConfig logs You can access BuildConfig logs using the web console or the CLI. Procedure To stream the logs of the latest build for a BuildConfig , enter the following command: $ oc logs -f bc/<buildconfig_name> 6.6.2. Accessing BuildConfig logs for a given version build You can access logs for a given version build for a BuildConfig using the web console or the CLI. Procedure To stream the logs for a given version build for a BuildConfig , enter the following command: $ oc logs --version=<number> bc/<buildconfig_name> 6.6.3. Enabling log verbosity You can enable a more verbose output by passing the BUILD_LOGLEVEL environment variable as part of the sourceStrategy in a BuildConfig . Note An administrator can set the default build verbosity for the entire Red Hat OpenShift Service on AWS instance by configuring env/BUILD_LOGLEVEL . This default can be overridden by specifying BUILD_LOGLEVEL in a given BuildConfig . You can specify a higher priority override on the command line for non-binary builds by passing --build-loglevel to oc start-build . Available log levels for source builds are as follows: Level 0 Produces output from containers running the assemble script and all encountered errors. This is the default. Level 1 Produces basic information about the executed process. Level 2 Produces very detailed information about the executed process. Level 3 Produces very detailed information about the executed process, and a listing of the archive contents. Level 4 Currently produces the same information as level 3. Level 5 Produces everything mentioned on previous levels and additionally provides docker push messages. Procedure To enable more verbose output, pass the BUILD_LOGLEVEL environment variable as part of the sourceStrategy or dockerStrategy in a BuildConfig : sourceStrategy: ... env: - name: "BUILD_LOGLEVEL" value: "2" 1 1 Adjust this value to the desired log level. | [
"oc start-build <buildconfig_name>",
"oc start-build --from-build=<build_name>",
"oc start-build <buildconfig_name> --follow",
"oc start-build <buildconfig_name> --env=<key>=<value>",
"oc start-build hello-world --from-repo=../hello-world --commit=v2",
"oc cancel-build <build_name>",
"oc cancel-build <build1_name> <build2_name> <build3_name>",
"oc cancel-build bc/<buildconfig_name>",
"oc cancel-build bc/<buildconfig_name>",
"oc delete bc <BuildConfigName>",
"oc delete --cascade=false bc <BuildConfigName>",
"oc describe build <build_name>",
"oc describe build <build_name>",
"oc logs -f bc/<buildconfig_name>",
"oc logs --version=<number> bc/<buildconfig_name>",
"sourceStrategy: env: - name: \"BUILD_LOGLEVEL\" value: \"2\" 1"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/builds_using_buildconfig/basic-build-operations |
9.2. Virtual Network Interface Cards | 9.2. Virtual Network Interface Cards 9.2.1. vNIC Profile Overview A Virtual Network Interface Card (vNIC) profile is a collection of settings that can be applied to individual virtual network interface cards in the Manager. A vNIC profile allows you to apply Network QoS profiles to a vNIC, enable or disable port mirroring, and add or remove custom properties. A vNIC profile also offers an added layer of administrative flexibility in that permission to use (consume) these profiles can be granted to specific users. In this way, you can control the quality of service that different users receive from a given network. 9.2.2. Creating or Editing a vNIC Profile Create or edit a Virtual Network Interface Controller (vNIC) profile to regulate network bandwidth for users and groups. Note If you are enabling or disabling port mirroring, all virtual machines using the associated profile must be in a down state before editing. Creating or Editing a vNIC Profile Click Network Networks . Click the logical network's name to open the details view. Click the vNIC Profiles tab. Click New or Edit . Enter the Name and Description of the profile. Select the relevant Quality of Service policy from the QoS list. Select a Network Filter from the drop-down list to manage the traffic of network packets to and from virtual machines. For more information on network filters, see Applying network filtering in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide . Select the Passthrough check box to enable passthrough of the vNIC and allow direct device assignment of a virtual function. Enabling the passthrough property will disable QoS, network filtering, and port mirroring as these are not compatible. For more information on passthrough, see Section 9.2.4, "Enabling Passthrough on a vNIC Profile" . If Passthrough is selected, optionally deselect the Migratable check box to disable migration for vNICs using this profile. If you keep this check box selected, see Additional Prerequisites for Virtual Machines with SR-IOV-Enabled vNICs in the Virtual Machine Management Guide . Use the Port Mirroring and Allow all users to use this Profile check boxes to toggle these options. Select a custom property from the custom properties list, which displays Please select a key... by default. Use the + and - buttons to add or remove custom properties. Click OK . Apply this profile to users and groups to regulate their network bandwidth. If you edited a vNIC profile, you must either restart the virtual machine, or hot unplug and then hot plug the vNIC if the guest operating system supports vNIC hot plug and hot unplug. 9.2.3. Explanation of Settings in the VM Interface Profile Window Table 9.5. VM Interface Profile Window Field Name Description Network A drop-down list of the available networks to apply the vNIC profile to. Name The name of the vNIC profile. This must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores between 1 and 50 characters. Description The description of the vNIC profile. This field is recommended but not mandatory. QoS A drop-down list of the available Network Quality of Service policies to apply to the vNIC profile. QoS policies regulate inbound and outbound network traffic of the vNIC. Network Filter A drop-down list of the available network filters to apply to the vNIC profile. 
Network filters improve network security by filtering the type of packets that can be sent to and from virtual machines. The default filter is vdsm-no-mac-spoofing , which is a combination of no-mac-spoofing and no-arp-mac-spoofing . For more information on the network filters provided by libvirt, see the Pre-existing network filters section of the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide . Use <No Network Filter> for virtual machine VLANs and bonds. On trusted virtual machines, choosing not to use a network filter can improve performance. Note Red Hat no longer supports disabling filters by setting the EnableMACAntiSpoofingFilterRules parameter to false using the engine-config tool. Use the <No Network Filter> option instead. Passthrough A check box to toggle the passthrough property. Passthrough allows a vNIC to connect directly to a virtual function of a host NIC. The passthrough property cannot be edited if the vNIC profile is attached to a virtual machine. QoS, network filters, and port mirroring are disabled in the vNIC profile if passthrough is enabled. Migratable A check box to toggle whether or not vNICs using this profile can be migrated. Migration is enabled by default on regular vNIC profiles; the check box is selected and cannot be changed. When the Passthrough check box is selected, Migratable becomes available and can be deselected, if required, to disable migration of passthrough vNICs. Port Mirroring A check box to toggle port mirroring. Port mirroring copies layer 3 network traffic on the logical network to a virtual interface on a virtual machine. It it not selected by default. For further details, see Port Mirroring in the Technical Reference . Device Custom Properties A drop-down menu to select available custom properties to apply to the vNIC profile. Use the + and - buttons to add and remove properties respectively. Allow all users to use this Profile A check box to toggle the availability of the profile to all users in the environment. It is selected by default. 9.2.4. Enabling Passthrough on a vNIC Profile Note This is one in a series of topics that show how to set up and configure SR-IOV on Red Hat Virtualization. For more information, see Setting Up and Configuring SR-IOV The passthrough property of a vNIC profile enables a vNIC to be directly connected to a virtual function (VF) of an SR-IOV-enabled NIC. The vNIC will then bypass the software network virtualization and connect directly to the VF for direct device assignment. The passthrough property cannot be enabled if the vNIC profile is already attached to a vNIC; this procedure creates a new profile to avoid this. If a vNIC profile has passthrough enabled, QoS, network filters, and port mirroring cannot be enabled on the same profile. For more information on SR-IOV, direct device assignment, and the hardware considerations for implementing these in Red Hat Virtualization, see Hardware Considerations for Implementing SR-IOV . Enabling Passthrough Click Network Networks . Click the logical network's name to open the details view. Click the vNIC Profiles tab to list all vNIC profiles for that logical network. Click New . Enter the Name and Description of the profile. Select the Passthrough check box. Optionally deselect the Migratable check box to disable migration for vNICs using this profile. If you keep this check box selected, see Additional Prerequisites for Virtual Machines with SR-IOV-Enabled vNICs in the Virtual Machine Management Guide . 
If necessary, select a custom property from the custom properties list, which displays Please select a key... by default. Use the + and - buttons to add or remove custom properties. Click OK . The vNIC profile is now passthrough-capable. To use this profile to directly attach a virtual machine to a NIC or PCI VF, attach the logical network to the NIC and create a new PCI Passthrough vNIC on the desired virtual machine that uses the passthrough vNIC profile. For more information on these procedures respectively, see Section 9.4.2, "Editing Host Network Interfaces and Assigning Logical Networks to Hosts" , and Adding a New Network Interface in the Virtual Machine Management Guide . 9.2.5. Removing a vNIC Profile Remove a vNIC profile to delete it from your virtualized environment. Removing a vNIC Profile Click Network Networks . Click the logical network's name to open the details view. Click the vNIC Profiles tab to display available vNIC profiles. Select one or more profiles and click Remove . Click OK . 9.2.6. Assigning Security Groups to vNIC Profiles Note This feature is only available when OpenStack Networking (neutron) is added as an external network provider. Security groups cannot be created through the Red Hat Virtualization Manager. You must create security groups through OpenStack. For more information, see Project Security Management in the Red Hat OpenStack Platform Users and Identity Management Guide . You can assign security groups to the vNIC profile of networks that have been imported from an OpenStack Networking instance and that use the Open vSwitch plug-in. A security group is a collection of strictly enforced rules that allow you to filter inbound and outbound traffic over a network interface. The following procedure outlines how to attach a security group to a vNIC profile. Note A security group is identified using the ID of that security group as registered in the OpenStack Networking instance. You can find the IDs of security groups for a given tenant by running the following command on the system on which OpenStack Networking is installed: Assigning Security Groups to vNIC Profiles Click Network Networks . Click the logical network's name to open the details view. Click the vNIC Profiles tab. Click New , or select an existing vNIC profile and click Edit . From the custom properties drop-down list, select SecurityGroups . Leaving the custom property drop-down blank applies the default security settings, which permit all outbound traffic and intercommunication but deny all inbound traffic from outside of the default security group. Note that removing the SecurityGroups property later will not affect the applied security group. In the text field, enter the ID of the security group to attach to the vNIC profile. Click OK . You have attached a security group to the vNIC profile. All traffic through the logical network to which that profile is attached will be filtered in accordance with the rules defined for that security group. 9.2.7. User Permissions for vNIC Profiles Configure user permissions to assign users to certain vNIC profiles. Assign the VnicProfileUser role to a user to enable them to use the profile. Restrict users from certain profiles by removing their permission for that profile. User Permissions for vNIC Profiles Click Network vNIC Profile . Click the vNIC profile's name to open the details view. Click the Permissions tab to show the current user permissions for the profile. Click Add or Remove to change user permissions for the vNIC profile. 
In the Add Permissions to User window, click My Groups to display your user groups. You can use this option to grant permissions to other users in your groups. You have configured user permissions for a vNIC profile. 9.2.8. Configuring vNIC Profiles for UCS Integration Cisco's Unified Computing System (UCS) is used to manage data center aspects such as computing, networking and storage resources. The vdsm-hook-vmfex-dev hook allows virtual machines to connect to Cisco's UCS-defined port profiles by configuring the vNIC profile. The UCS-defined port profiles contain the properties and settings used to configure virtual interfaces in UCS. The vdsm-hook-vmfex-dev hook is installed by default with VDSM. See Appendix A, VDSM and Hooks for more information. When a virtual machine that uses the vNIC profile is created, it will use the Cisco vNIC. The procedure to configure the vNIC profile for UCS integration involves first configuring a custom device property. When configuring the custom device property, any existing value it contained is overwritten. When combining new and existing custom properties, include all of the custom properties in the command used to set the key's value. Multiple custom properties are separated by a semi-colon. Note A UCS port profile must be configured in Cisco UCS before configuring the vNIC profile. Configuring the Custom Device Property On the Red Hat Virtualization Manager, configure the vmfex custom property and set the cluster compatibility level using --cver . Verify that the vmfex custom device property was added. Restart the ovirt-engine service. The vNIC profile to configure can belong to a new or existing logical network. See Section 9.1.2, "Creating a New Logical Network in a Data Center or Cluster" for instructions to configure a new logical network. Configuring a vNIC Profile for UCS Integration Click Network Networks . Click the logical network's name to open the details view. Click the vNIC Profiles tab. Click New , or select a vNIC profile and click Edit . Enter the Name and Description of the profile. Select the vmfex custom property from the custom properties list and enter the UCS port profile name. Click OK . | [
"neutron security-group-list",
"engine-config -s CustomDeviceProperties='{type=interface;prop={vmfex=^[a-zA-Z0-9_.-]{2,32}USD}}' --cver=3.6",
"engine-config -g CustomDeviceProperties",
"systemctl restart ovirt-engine.service"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-Virtual_Network_Interface_Cards |
Chapter 1. Backing up storage data from Amazon EBS | Chapter 1. Backing up storage data from Amazon EBS Red Hat recommends that you back up the data on your persistent volume claims (PVCs) regularly. Backing up your data is particularly important before deleting a user and before uninstalling OpenShift AI, as all PVCs are deleted when OpenShift AI is uninstalled. Prerequisites You have credentials for OpenShift Cluster Manager ( https://console.redhat.com/openshift/ ). You have administrator access to the OpenShift Dedicated cluster. You have credentials for the Amazon Web Services (AWS) account that the OpenShift Dedicated cluster is deployed under. Procedure Determine the IDs of the persistent volumes (PVs) that you want to back up. In the OpenShift Dedicated web console, change into the Administrator perspective. Click Home Projects . Click the rhods-notebooks project. The Details page for the project opens. Click the PersistentVolumeClaims in the Inventory section. The PersistentVolumeClaims page opens. Note the ID of the persistent volume (PV) that you want to back up. Note The persistent volumes (PV) that you make a note of are required to identify the correct EBS volume to back up in your AWS instance. Locate the EBS volume containing the PVs that you want to back up. See Amazon Web Services documentation: Create Amazon EBS snapshots for more information. Log in to AWS ( https://aws.amazon.com ) and ensure that you are viewing the region that your OpenShift Dedicated cluster is deployed in. Click Services . Click Compute EC2 . Click Elastic Block Storage Volumes in the side navigation. The Volumes page opens. In the search bar, enter the ID of the persistent volume (PV) that you made a note of earlier. The Volumes page reloads to display the search results. Click on the volume shown and verify that any kubernetes.io/created-for/pvc/namespace tags contain the value rhods-notebooks , and any kubernetes.io/created-for/pvc/name tags match the name of the persistent volume that the EC2 volume is being used for, for example, jupyter-nb-user1-pvc . Back up the EBS volume that contains your persistent volume (PV). Right-click on the volume that you want to back up and select Create Snapshot from the list. The Create Snapshot page opens. Enter a Description for the volume. Click Create Snapshot . The snapshot of the volume is created. Click Close . Verification The snapshot that you created is visible on the Snapshots page in AWS. Additional resources Amazon Web Services documentation: Create Amazon EBS snapshots | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/uninstalling_openshift_ai_cloud_service/backing-up-storage-data-from-amazon-ebs_install |
Providing feedback on Red Hat JBoss Web Server documentation | Providing feedback on Red Hat JBoss Web Server documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Create creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_6.0_service_pack_5_release_notes/providing-direct-documentation-feedback_6.0.5_rn |
10.5. Programming Languages | 10.5. Programming Languages Ruby 2.0.0 Red Hat Enterprise Linux 7 provides the latest Ruby version, 2.0.0. The most notable of the changes between version 2.0.0 and 1.8.7 included in Red Hat Enterprise Linux 6 are the following: New interpreter, YARV (yet another Ruby VM), which significantly reduces loading times, especially for applications with large trees or files; New and faster "Lazy Sweep" garbage collector; Ruby now supports string encoding; Ruby now supports native threads instead of green threads. For more information about Ruby 2.0.0, consult the upstream pages of the project: https://www.ruby-lang.org/en/ . Python 2.7.5 Red Hat Enterprise Linux 7 includes Python 2.7.5, which is the latest Python 2.7 series release. This version contains many improvements in performance and provides forward compatibility with Python 3. The most notable of the changes in Python 2.7.5 are the following: An ordered dictionary type; A faster I/O module; Dictionary comprehensions and set comprehensions; The sysconfig module. For the full list of changes, see http://docs.python.org/dev/whatsnew/2.7.html Java 7 and Multiple JDKs Red Hat Enterprise Linux 7 features OpenJDK7 as the default Java Development Kit (JDK) and Java 7 as the default Java version. All Java 7 packages ( java-1.7.0-openjdk , java-1.7.0-oracle , java-1.7.1-ibm ) allow installation of multiple versions in parallel, similarly to the kernel. The ability of parallel installation allows users to try out multiple versions of the same JDK simultaneously, to tune performance and debug problems if needed. The precise JDK is selectable through /etc/alternatives/ as before. Important The Optional channel must be enabled in order to successfully install the java-1.7.1-ibm-jdbc or java-1.7.1-ibm-plugin packages from the Supplementary channel. The Optional channel contains packages that satisfy dependencies of the desired Java packages. Before installing packages from the Optional and Supplementary channels, see Scope of Coverage Details . Information on subscribing to the Optional and Supplementary channels can be found in the Red Hat Knowledgebase solution How to access Optional and Supplementary channels . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/sect-red_hat_enterprise_linux-7.0_release_notes-compiler_and_tools-programming_languages |
Chapter 5. Sending traces and metrics to the OpenTelemetry Collector | Chapter 5. Sending traces and metrics to the OpenTelemetry Collector You can set up and use the Red Hat build of OpenTelemetry to send traces to the OpenTelemetry Collector or the TempoStack instance. Sending traces and metrics to the OpenTelemetry Collector is possible with or without sidecar injection. 5.1. Sending traces and metrics to the OpenTelemetry Collector with sidecar injection You can set up sending telemetry data to an OpenTelemetry Collector instance with sidecar injection. The Red Hat build of OpenTelemetry Operator allows sidecar injection into deployment workloads and automatic configuration of your instrumentation to send telemetry data to the OpenTelemetry Collector. Prerequisites The Red Hat OpenShift distributed tracing platform (Tempo) is installed, and a TempoStack instance is deployed. You have access to the cluster through the web console or the OpenShift CLI ( oc ): You are logged in to the web console as a cluster administrator with the cluster-admin role. An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Procedure Create a project for an OpenTelemetry Collector instance. apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability Create a service account. apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar namespace: observability Grant the permissions to the service account for the k8sattributes and resourcedetection processors. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: ["", "config.openshift.io"] resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-sidecar namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io Deploy the OpenTelemetry Collector as a sidecar. apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: serviceAccount: otel-collector-sidecar mode: sidecar config: serviceAccount: otel-collector-sidecar receivers: otlp: protocols: grpc: {} http: {} processors: batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] timeout: 2s exporters: otlp: endpoint: "tempo-<example>-gateway:8090" 1 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [memory_limiter, resourcedetection, batch] exporters: [otlp] 1 This points to the Gateway of the TempoStack instance deployed by using the <example> Tempo Operator. Create your deployment using the otel-collector-sidecar service account. Add the sidecar.opentelemetry.io/inject: "true" annotation to your Deployment object. This will inject all the needed environment variables to send data from your workloads to the OpenTelemetry Collector instance. 5.2. Sending traces and metrics to the OpenTelemetry Collector without sidecar injection You can set up sending telemetry data to an OpenTelemetry Collector instance without sidecar injection, which involves manually setting several environment variables. 
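As a hedged illustration of the final step in Section 5.1 (the deployment name and namespace are placeholders, and the use of oc patch is an assumption rather than part of the documented procedure), one way to add the sidecar.opentelemetry.io/inject annotation to the pod template of an existing deployment is: $ oc patch deployment <my_app> -n <my_namespace> --type merge -p '{"spec":{"template":{"metadata":{"annotations":{"sidecar.opentelemetry.io/inject":"true"}}}}}' When the pods are recreated, each pod should include an additional OpenTelemetry Collector sidecar container alongside the application container.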
Prerequisites The Red Hat OpenShift distributed tracing platform (Tempo) is installed, and a TempoStack instance is deployed. You have access to the cluster through the web console or the OpenShift CLI ( oc ): You are logged in to the web console as a cluster administrator with the cluster-admin role. An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Procedure Create a project for an OpenTelemetry Collector instance. apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability Create a service account. apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: observability Grant the permissions to the service account for the k8sattributes and resourcedetection processors. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: ["", "config.openshift.io"] resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io Deploy the OpenTelemetry Collector instance with the OpenTelemetryCollector custom resource. apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: "tempo-<example>-distributor:4317" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp] 1 This points to the Gateway of the TempoStack instance deployed by using the <example> Tempo Operator. Set the environment variables in the container with your instrumented application. Name Description Default value OTEL_SERVICE_NAME Sets the value of the service.name resource attribute. "" OTEL_EXPORTER_OTLP_ENDPOINT Base endpoint URL for any signal type with an optionally specified port number. https://localhost:4317 OTEL_EXPORTER_OTLP_CERTIFICATE Path to the certificate file for the TLS credentials of the gRPC client. https://localhost:4317 OTEL_TRACES_SAMPLER Sampler to be used for traces. parentbased_always_on OTEL_EXPORTER_OTLP_PROTOCOL Transport protocol for the OTLP exporter. grpc OTEL_EXPORTER_OTLP_TIMEOUT Maximum time interval for the OTLP exporter to wait for each batch export. 10s OTEL_EXPORTER_OTLP_INSECURE Disables client transport security for gRPC requests. An HTTPS schema overrides it. False | [
"apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar namespace: observability",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-sidecar namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: serviceAccount: otel-collector-sidecar mode: sidecar config: serviceAccount: otel-collector-sidecar receivers: otlp: protocols: grpc: {} http: {} processors: batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] timeout: 2s exporters: otlp: endpoint: \"tempo-<example>-gateway:8090\" 1 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [memory_limiter, resourcedetection, batch] exporters: [otlp]",
"apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: observability",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: \"tempo-<example>-distributor:4317\" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp]"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/red_hat_build_of_opentelemetry/otel-sending-traces-and-metrics-to-otel-collector |
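The sidecar procedure in the entry above tells you to create your Deployment with the otel-collector-sidecar service account and to add the sidecar.opentelemetry.io/inject: "true" annotation, but it does not show such a Deployment. The following is a minimal sketch, not part of the original entry: the workload name sample-app and the image reference are placeholder assumptions, and the annotation is placed on the Pod template so that the Operator's admission webhook can match it when Pods are created.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app                                  # placeholder workload name
  namespace: observability
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
      annotations:
        sidecar.opentelemetry.io/inject: "true"     # annotation described in section 5.1
    spec:
      serviceAccountName: otel-collector-sidecar    # service account created in section 5.1
      containers:
      - name: sample-app
        image: quay.io/example/sample-app:latest    # placeholder image
        ports:
        - containerPort: 8080

With the annotation in place, each Pod created from this Deployment receives the injected collector container and the environment variables needed to send telemetry to it, so the application typically requires no further OTLP endpoint configuration.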
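For the sidecar-less procedure in section 5.2 of the entry above, the environment variables from the table must be set on the instrumented container yourself; this replaces, rather than complements, the sidecar sketch. The following is an assumption-laden example and not part of the original entry: the workload name and image are placeholders, the endpoint assumes the Operator exposes the deployment-mode collector through a Service named otel-collector in the observability namespace (verify the actual Service name in your cluster), and insecure transport is used because the example collector receiver does not configure TLS.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app                                   # placeholder workload name
  namespace: observability
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - name: sample-app
        image: quay.io/example/sample-app:latest     # placeholder image
        env:
        - name: OTEL_SERVICE_NAME                    # sets the service.name resource attribute
          value: sample-app
        - name: OTEL_EXPORTER_OTLP_ENDPOINT          # OTLP gRPC endpoint of the collector (assumed Service name)
          value: http://otel-collector.observability.svc.cluster.local:4317
        - name: OTEL_EXPORTER_OTLP_PROTOCOL          # grpc is the default; shown for clarity
          value: grpc
        - name: OTEL_TRACES_SAMPLER                  # parentbased_always_on is the default
          value: parentbased_always_on
        - name: OTEL_EXPORTER_OTLP_INSECURE          # plain-text gRPC to the in-cluster collector
          value: "true"

Only OTEL_SERVICE_NAME and OTEL_EXPORTER_OTLP_ENDPOINT normally need explicit values; the remaining variables are shown with their defaults from the table for reference.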
Chapter 6. Creating a virtual machine | Chapter 6. Creating a virtual machine 6.1. Creating virtual machines from instance types You can simplify virtual machine (VM) creation by using instance types, whether you use the Red Hat OpenShift Service on AWS web console or the CLI to create VMs. Note Creating a VM from an instance type in OpenShift Virtualization 4.15 and higher is supported on Red Hat OpenShift Service on AWS clusters. In OpenShift Virtualization 4.14, creating a VM from an instance type is a Technology Preview feature and is not supported on Red Hat OpenShift Service on AWS clusters. 6.1.1. About instance types An instance type is a reusable object where you can define resources and characteristics to apply to new VMs. You can define custom instance types or use the variety that are included when you install OpenShift Virtualization. To create a new instance type, you must first create a manifest, either manually or by using the virtctl CLI tool. You then create the instance type object by applying the manifest to your cluster. OpenShift Virtualization provides two CRDs for configuring instance types: A namespaced object: VirtualMachineInstancetype A cluster-wide object: VirtualMachineClusterInstancetype These objects use the same VirtualMachineInstancetypeSpec . 6.1.1.1. Required attributes When you configure an instance type, you must define the cpu and memory attributes. Other attributes are optional. Note When you create a VM from an instance type, you cannot override any parameters defined in the instance type. Because instance types require defined CPU and memory attributes, OpenShift Virtualization always rejects additional requests for these resources when creating a VM from an instance type. You can manually create an instance type manifest. For example: Example YAML file with required fields apiVersion: instancetype.kubevirt.io/v1beta1 kind: VirtualMachineInstancetype metadata: name: example-instancetype spec: cpu: guest: 1 1 memory: guest: 128Mi 2 1 Required. Specifies the number of vCPUs to allocate to the guest. 2 Required. Specifies an amount of memory to allocate to the guest. You can create an instance type manifest by using the virtctl CLI utility. For example: Example virtctl command with required fields USD virtctl create instancetype --cpu 2 --memory 256Mi where: --cpu <value> Specifies the number of vCPUs to allocate to the guest. Required. --memory <value> Specifies an amount of memory to allocate to the guest. Required. Tip You can immediately create the object from the new manifest by running the following command: USD virtctl create instancetype --cpu 2 --memory 256Mi | oc apply -f - 6.1.1.2. Optional attributes In addition to the required cpu and memory attributes, you can include the following optional attributes in the VirtualMachineInstancetypeSpec : annotations List annotations to apply to the VM. gpus List vGPUs for passthrough. hostDevices List host devices for passthrough. ioThreadsPolicy Define an IO threads policy for managing dedicated disk access. launchSecurity Configure Secure Encrypted Virtualization (SEV). nodeSelector Specify node selectors to control the nodes where this VM is scheduled. schedulerName Define a custom scheduler to use for this VM instead of the default scheduler. 6.1.2. Pre-defined instance types OpenShift Virtualization includes a set of pre-defined instance types called common-instancetypes . Some are specialized for specific workloads and others are workload-agnostic. 
These instance type resources are named according to their series, version, and size. The size value follows the . delimiter and ranges from nano to 8xlarge . Table 6.1. common-instancetypes series comparison Use case Series Characteristics vCPU to memory ratio Example resource Network N Hugepages Dedicated CPU Isolated emulator threads Requires nodes capable of running DPDK workloads 1:2 n1.medium 4 vCPUs 4GiB Memory Overcommitted O Overcommitted memory Burstable CPU performance 1:4 o1.small 1 vCPU 2GiB Memory Compute Exclusive CX Hugepages Dedicated CPU Isolated emulator threads vNUMA 1:2 cx1.2xlarge 8 vCPUs 16GiB Memory General Purpose U Burstable CPU performance 1:4 u1.medium 1 vCPU 4GiB Memory Memory Intensive M Hugepages Burstable CPU performance 1:8 m1.large 2 vCPUs 16GiB Memory 6.1.3. Specifying an instance type or preference You can specify an instance type, a preference, or both to define a set of workload sizing and runtime characteristics for reuse across multiple VMs. 6.1.3.1. Using flags to specify instance types and preferences Specify instance types and preferences by using flags. Prerequisites You must have an instance type, preference, or both on the cluster. Procedure To specify an instance type when creating a VM, use the --instancetype flag. To specify a preference, use the --preference flag. The following example includes both flags: USD virtctl create vm --instancetype <my_instancetype> --preference <my_preference> Optional: To specify a namespaced instance type or preference, include the kind in the value passed to the --instancetype or --preference flag command. The namespaced instance type or preference must be in the same namespace you are creating the VM in. The following example includes flags for a namespaced instance type and a namespaced preference: USD virtctl create vm --instancetype virtualmachineinstancetype/<my_instancetype> --preference virtualmachinepreference/<my_preference> 6.1.3.2. Inferring an instance type or preference Inferring instance types, preferences, or both is enabled by default, and the inferFromVolumeFailure policy of the inferFromVolume attribute is set to Ignore . When inferring from the boot volume, errors are ignored, and the VM is created with the instance type and preference left unset. However, when flags are applied, the inferFromVolumeFailure policy defaults to Reject . When inferring from the boot volume, errors result in the rejection of the creation of that VM. You can use the --infer-instancetype and --infer-preference flags to infer which instance type, preference, or both to use to define the workload sizing and runtime characteristics of a VM. Prerequisites You have installed the virtctl tool. Procedure To explicitly infer instance types from the volume used to boot the VM, use the --infer-instancetype flag. To explicitly infer preferences, use the --infer-preference flag. The following command includes both flags: USD virtctl create vm --volume-import type:pvc,src:my-ns/my-pvc --infer-instancetype --infer-preference To infer an instance type or preference from a volume other than the volume used to boot the VM, use the --infer-instancetype-from and --infer-preference-from flags to specify any of the virtual machine's volumes. In the example below, the virtual machine boots from volume-a but infers the instancetype and preference from volume-b . 
USD virtctl create vm \ --volume-import=type:pvc,src:my-ns/my-pvc-a,name:volume-a \ --volume-import=type:pvc,src:my-ns/my-pvc-b,name:volume-b \ --infer-instancetype-from volume-b \ --infer-preference-from volume-b 6.1.3.3. Setting the inferFromVolume labels Use the following labels on your PVC, data source, or data volume to instruct the inference mechanism which instance type, preference, or both to use when trying to boot from a volume. A cluster-wide instance type: instancetype.kubevirt.io/default-instancetype label. A namespaced instance type: instancetype.kubevirt.io/default-instancetype-kind label. Defaults to the VirtualMachineClusterInstancetype label if left empty. A cluster-wide preference: instancetype.kubevirt.io/default-preference label. A namespaced preference: instancetype.kubevirt.io/default-preference-kind label. Defaults to VirtualMachineClusterPreference label, if left empty. Prerequisites You must have an instance type, preference, or both on the cluster. Procedure To apply a label to a data source, use oc label . The following command applies a label that points to a cluster-wide instance type: USD oc label DataSource foo instancetype.kubevirt.io/default-instancetype=<my_instancetype> 6.1.4. Creating a VM from an instance type by using the web console You can create a virtual machine (VM) from an instance type by using the Red Hat OpenShift Service on AWS web console. You can also use the web console to create a VM by copying an existing snapshot or to clone a VM. You can create a VM from a list of available bootable volumes. You can add Linux- or Windows-based volumes to the list. Procedure In the web console, navigate to Virtualization Catalog . The InstanceTypes tab opens by default. Select either of the following options: Select a suitable bootable volume from the list. If the list is truncated, click the Show all button to display the entire list. Note The bootable volume table lists only those volumes in the openshift-virtualization-os-images namespace that have the instancetype.kubevirt.io/default-preference label. Optional: Click the star icon to designate a bootable volume as a favorite. Starred bootable volumes appear first in the volume list. Click Add volume to upload a new volume or to use an existing persistent volume claim (PVC), a volume snapshot, or a containerDisk volume. Click Save . Logos of operating systems that are not available in the cluster are shown at the bottom of the list. You can add a volume for the required operating system by clicking the Add volume link. In addition, there is a link to the Create a Windows boot source quick start. The same link appears in a popover if you hover the pointer over the question mark icon to the Select volume to boot from line. Immediately after you install the environment or when the environment is disconnected, the list of volumes to boot from is empty. In that case, three operating system logos are displayed: Windows, RHEL, and Linux. You can add a new volume that meets your requirements by clicking the Add volume button. Click an instance type tile and select the resource size appropriate for your workload. Optional: Choose the virtual machine details, including the VM's name, that apply to the volume you are booting from: For a Linux-based volume, follow these steps to configure SSH: If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key in the VirtualMachine details section. 
Select one of the following options: Use existing : Select a secret from the secrets list. Add new : Follow these steps: Browse to the public SSH key file or paste the file in the key field. Enter the secret name. Optional: Select Automatically apply this key to any new VirtualMachine you create in this project . Click Save . For a Windows volume, follow either of these set of steps to configure sysprep options: If you have not already added sysprep options for the Windows volume, follow these steps: Click the edit icon beside Sysprep in the VirtualMachine details section. Add the Autoattend.xml answer file. Add the Unattend.xml answer file. Click Save . If you want to use existing sysprep options for the Windows volume, follow these steps: Click Attach existing sysprep . Enter the name of the existing sysprep Unattend.xml answer file. Click Save . Optional: If you are creating a Windows VM, you can mount a Windows driver disk: Click the Customize VirtualMachine button. On the VirtualMachine details page, click Storage . Select the Mount Windows drivers disk checkbox. Optional: Click View YAML & CLI to view the YAML file. Click CLI to view the CLI commands. You can also download or copy either the YAML file contents or the CLI commands. Click Create VirtualMachine . After the VM is created, you can monitor the status on the VirtualMachine details page. 6.1.5. Changing the instance type of a VM You can change the instance type associated with a running virtual machine (VM) by using the web console. The change takes effect immediately. Prerequisites You created the VM by using an instance type. Procedure In the Red Hat OpenShift Service on AWS web console, click Virtualization VirtualMachines . Select a VM to open the VirtualMachine details page. Click the Configuration tab. On the Details tab, click the instance type text to open the Edit Instancetype dialog. For example, click 1 CPU | 2 GiB Memory . Edit the instance type by using the Series and Size lists. Select an item from the Series list to show the relevant sizes for that series. For example, select General Purpose . Select the VM's new instance type from the Size list. For example, select medium: 1 CPUs, 4Gi Memory , which is available in the General Purpose series. Click Save . Verification Click the YAML tab. Click Reload . Review the VM YAML to confirm that the instance type changed. 6.2. Creating virtual machines from templates You can create virtual machines (VMs) from Red Hat templates by using the Red Hat OpenShift Service on AWS web console. 6.2.1. About VM templates You can use VM templates to help you easily create VMs. Expedite creation with boot sources You can expedite VM creation by using templates that have an available boot source. Templates with a boot source are labeled Available boot source if they do not have a custom label. Templates without a boot source are labeled Boot source required . See Managing automatic boot source updates for details. Customize before starting the VM You can customize the disk source and VM parameters before you start the VM. Note If you copy a VM template with all its labels and annotations, your version of the template is marked as deprecated when a new version of the Scheduling, Scale, and Performance (SSP) Operator is deployed. You can remove this designation. See Removing a deprecated designation from a customized VM template by using the web console . Single-node OpenShift Due to differences in storage behavior, some templates are incompatible with single-node OpenShift. 
To ensure compatibility, do not set the evictionStrategy field for templates or VMs that use data volumes or storage profiles. 6.2.2. Creating a VM from a template You can create a virtual machine (VM) from a template with an available boot source by using the Red Hat OpenShift Service on AWS web console. You can customize template or VM parameters, such as data sources, Cloud-init, or SSH keys, before you start the VM. You can choose between two views in the web console to create the VM: A virtualization-focused view, which provides a concise list of virtualization-related options at the top of the view A general view, which provides access to the various web console options, including Virtualization Procedure From the Red Hat OpenShift Service on AWS web console, choose your view: For a virtualization-focused view, select Administrator Virtualization Catalog . For a general view, navigate to Virtualization Catalog . Click the Template catalog tab. Click the Boot source available checkbox to filter templates with boot sources. The catalog displays the default templates. Click All templates to view the available templates for your filters. To focus on particular templates, enter the keyword in the Filter by keyword field. Choose a template project from the All projects dropdown menu, or view all projects. Click a template tile to view its details. Optional: If you are using a Windows template, you can mount a Windows driver disk by selecting the Mount Windows drivers disk checkbox. If you do not need to customize the template or VM parameters, click Quick create VirtualMachine to create a VM from the template. If you need to customize the template or VM parameters, do the following: Click Customize VirtualMachine . The Customize and create VirtualMachine page displays the Overview , YAML , Scheduling , Environment , Network interfaces , Disks , Scripts , and Metadata tabs. Click the Scripts tab to edit the parameters that must be set before the VM boots, such as Cloud-init , SSH key , or Sysprep (Windows VM only). Optional: Click the Start this virtualmachine after creation (Always) checkbox. Click Create VirtualMachine . The VirtualMachine details page displays the provisioning status. 6.2.2.1. Removing a deprecated designation from a customized VM template by using the web console You can customize an existing virtual machine (VM) template by modifying the VM or template parameters, such as data sources, cloud-init, or SSH keys, before you start the VM. If you customize a template by copying it and including all of its labels and annotations, the customized template is marked as deprecated when a new version of the Scheduling, Scale, and Performance (SSP) Operator is deployed. You can remove the deprecated designation from the customized template. Procedure Navigate to Virtualization Templates in the web console. From the list of VM templates, click the template marked as deprecated. Click Edit to the pencil icon beside Labels . Remove the following two labels: template.kubevirt.io/type: "base" template.kubevirt.io/version: "version" Click Save . Click the pencil icon beside the number of existing Annotations . Remove the following annotation: template.kubevirt.io/deprecated Click Save . 6.2.2.2. Creating a custom VM template in the web console You create a virtual machine template by editing a YAML file example in the Red Hat OpenShift Service on AWS web console. Procedure In the web console, click Virtualization Templates in the side menu. 
Optional: Use the Project drop-down menu to change the project associated with the new template. All templates are saved to the openshift project by default. Click Create Template . Specify the template parameters by editing the YAML file. Click Create . The template is displayed on the Templates page. Optional: Click Download to download and save the YAML file. | [
"apiVersion: instancetype.kubevirt.io/v1beta1 kind: VirtualMachineInstancetype metadata: name: example-instancetype spec: cpu: guest: 1 1 memory: guest: 128Mi 2",
"virtctl create instancetype --cpu 2 --memory 256Mi",
"virtctl create instancetype --cpu 2 --memory 256Mi | oc apply -f -",
"virtctl create vm --instancetype <my_instancetype> --preference <my_preference>",
"virtctl create vm --instancetype virtualmachineinstancetype/<my_instancetype> --preference virtualmachinepreference/<my_preference>",
"virtctl create vm --volume-import type:pvc,src:my-ns/my-pvc --infer-instancetype --infer-preference",
"virtctl create vm --volume-import=type:pvc,src:my-ns/my-pvc-a,name:volume-a --volume-import=type:pvc,src:my-ns/my-pvc-b,name:volume-b --infer-instancetype-from volume-b --infer-preference-from volume-b",
"oc label DataSource foo instancetype.kubevirt.io/default-instancetype=<my_instancetype>"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/virtualization/creating-a-virtual-machine |
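Section 6.1.3.3 of the entry above lists the inferFromVolume labels but illustrates them only with an oc label command. The following is a minimal declarative sketch of a boot-volume PVC carrying those labels, not part of the original entry: the PVC name, capacity, access mode, and the rhel.9 preference name are illustrative assumptions, while u1.medium is taken from the pre-defined instance types table and openshift-virtualization-os-images is the namespace the entry names for bootable volumes.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rhel9-boot-volume                                      # placeholder PVC name
  namespace: openshift-virtualization-os-images
  labels:
    instancetype.kubevirt.io/default-instancetype: u1.medium   # cluster-wide instance type to infer
    instancetype.kubevirt.io/default-preference: rhel.9        # cluster-wide preference to infer (assumed name)
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 30Gi

With these labels in place, a command such as virtctl create vm --volume-import type:pvc,src:openshift-virtualization-os-images/rhel9-boot-volume --infer-instancetype --infer-preference resolves the instance type and preference from the volume instead of requiring explicit --instancetype and --preference flags.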
Chapter 2. Requirements for scaling storage | Chapter 2. Requirements for scaling storage Before you proceed to scale the storage nodes, refer to the following sections to understand the node requirements for your specific Red Hat OpenShift Data Foundation instance: Platform requirements Resource requirements Storage device requirements Dynamic storage devices Local storage devices Capacity planning Important Always ensure that you have plenty of storage capacity. If storage ever fills completely, it is not possible to add capacity, or to delete or migrate content away from the storage to free up space. Completely full storage is very difficult to recover. Capacity alerts are issued when cluster storage capacity reaches 75% (near-full) and 85% (full) of total capacity. Always address capacity warnings promptly, and review your storage regularly to ensure that you do not run out of storage space. If you do run out of storage space completely, contact Red Hat Customer Support . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/scaling_storage/requirements-for-scaling-storage-nodes
Chapter 3. Getting started | Chapter 3. Getting started This chapter guides you through the steps to set up your environment and run a simple messaging program. 3.1. Prerequisites To build the example, Maven must be configured to use the Red Hat repository or a local repository . You must install the examples . You must have a message broker listening for connections on localhost . It must have anonymous access enabled. For more information, see Starting the broker . You must have a queue named exampleQueue . For more information, see Creating a queue . 3.2. Running your first example The example creates a consumer and producer for a queue named exampleQueue . It sends a text message and then receives it back, printing the received message to the console. Procedure Use Maven to build the examples by running the following command in the <install-dir> /examples/protocols/openwire/queue directory. USD mvn clean package dependency:copy-dependencies -DincludeScope=runtime -DskipTests The addition of dependency:copy-dependencies results in the dependencies being copied into the target/dependency directory. Use the java command to run the example. On Linux or UNIX: USD java -cp "target/classes:target/dependency/*" org.apache.activemq.artemis.jms.example.QueueExample On Windows: > java -cp "target\classes;target\dependency\*" org.apache.activemq.artemis.jms.example.QueueExample Running it on Linux results in the following output: USD java -cp "target/classes:target/dependency/*" org.apache.activemq.artemis.jms.example.QueueExample Sent message: This is a text message Received message: This is a text message The source code for the example is in the <install-dir> /examples/protocols/openwire/queue/src directory. Additional examples are available in the <install-dir> /examples/protocols/openwire directory. | [
"mvn clean package dependency:copy-dependencies -DincludeScope=runtime -DskipTests",
"java -cp \"target/classes:target/dependency/*\" org.apache.activemq.artemis.jms.example.QueueExample",
"> java -cp \"target\\classes;target\\dependency\\*\" org.apache.activemq.artemis.jms.example.QueueExample",
"java -cp \"target/classes:target/dependency/*\" org.apache.activemq.artemis.jms.example.QueueExample Sent message: This is a text message Received message: This is a text message"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_openwire_jms_client/getting_started |
Chapter 5. Asset execution options with Red Hat Process Automation Manager | Chapter 5. Asset execution options with Red Hat Process Automation Manager After you build and deploy your Red Hat Process Automation Manager project to KIE Server or other environment, you can execute the deployed assets for testing or for runtime consumption. You can also execute assets locally in addition to or instead of executing them after deployment. The following options are the main methods for Red Hat Process Automation Manager asset execution: Table 5.1. Asset execution options Execution option Description Documentation Execution in KIE Server If you deployed Red Hat Process Automation Manager project assets to KIE Server, you can use the KIE Server REST API or Java client API to execute and interact with the deployed assets. You can also use Business Central or the headless Process Automation Manager controller outside of Business Central to manage the configurations and KIE containers in the KIE Server instances associated with your deployed assets. For process definitions, you can use Business Central directly to execute process instances. Interacting with Red Hat Process Automation Manager using KIE APIs Execution in an embedded Java application If you deployed Red Hat Process Automation Manager project assets in your own Java virtual machine (JVM) environment, microservice, or application server, you can use custom APIs or application interactions with core KIE APIs (not KIE Server APIs) to execute assets in the embedded engine. KIE Public API Execution in a local environment for extended testing As part of your development cycle, you can execute assets locally to ensure that the assets you have created in Red Hat Process Automation Manager function as intended. You can use local execution in addition to or instead of executing assets after deployment. "Executing rules" in Designing a decision service using DRL rules Smart Router (KIE Server router) Depending on your deployment and execution environment, you can use a Smart Router to aggregate multiple independent KIE Server instances as though they are a single server. Smart Router is a single endpoint that can receive calls from client applications to any of your services and route each call automatically to the KIE Server that runs the service. For more information about Smart Router, see Installing and configuring Red Hat Process Automation Manager in a Red Hat JBoss EAP clustered environment . | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/designing_your_decision_management_architecture_for_red_hat_process_automation_manager/project-asset-execution-options-ref_decision-management-architecture |
Data Grid documentation | Data Grid documentation Documentation for Data Grid is available on the Red Hat customer portal. Data Grid 8.5 Documentation Data Grid 8.5 Component Details Supported Configurations for Data Grid 8.5 Data Grid 8 Feature Support Data Grid Deprecated Features and Functionality | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_security_guide/rhdg-docs_datagrid |