title | content | commands | url
---|---|---|---|
Chapter 14. Creating exports using NFS | Chapter 14. Creating exports using NFS This section describes how to create exports using NFS that can then be accessed externally from the OpenShift cluster. Follow the instructions below to create exports and access them externally from the OpenShift cluster: Section 14.1, "Enabling the NFS feature" Section 14.2, "Creating NFS exports" Section 14.3, "Consuming NFS exports in-cluster" Section 14.4, "Consuming NFS exports externally from the OpenShift cluster" 14.1. Enabling the NFS feature To use the NFS feature, you must first enable it in the cluster. Prerequisites OpenShift Data Foundation is installed and running in the openshift-storage namespace. The OpenShift Data Foundation installation includes a CephFilesystem. Procedure Run the following commands to enable the NFS feature: Verification steps NFS installation and configuration is complete when the following conditions are met: The CephNFS resource named ocs-storagecluster-cephnfs has a status of Ready. Check that all csi-nfsplugin-* pods are running: The output lists multiple pods. For example: 14.2. Creating NFS exports NFS exports are created by creating a Persistent Volume Claim (PVC) against the ocs-storagecluster-ceph-nfs StorageClass. You can create NFS PVCs in one of two ways: Create an NFS PVC using a YAML file. The following is an example PVC. Note: volumeMode: Block will not work for NFS volumes. <desired_name> Specify a name for the PVC, for example, my-nfs-export. The export is created once the PVC reaches the Bound state. Create NFS PVCs from the OpenShift Container Platform web console. Prerequisites Ensure that you are logged in to the OpenShift Container Platform web console and the NFS feature is enabled for the storage cluster. Procedure In the OpenShift web console, click Storage → Persistent Volume Claims. Set the Project to openshift-storage. Click Create PersistentVolumeClaim. Specify the Storage Class, ocs-storagecluster-ceph-nfs. Specify the PVC Name, for example, my-nfs-export. Select the required Access Mode. Specify a Size as per application requirements. Select the Volume mode as Filesystem. Note: Block mode is not supported for NFS PVCs. Click Create and wait until the PVC is in the Bound status. 14.3. Consuming NFS exports in-cluster Kubernetes application pods can consume NFS exports by mounting a previously created PVC. You can mount the PVC in one of two ways: Using a YAML: Below is an example pod that uses the example PVC created in Section 14.2, "Creating NFS exports": <pvc_name> Specify the PVC you created previously, for example, my-nfs-export. Using the OpenShift Container Platform web console. Procedure On the OpenShift Container Platform web console, navigate to Workloads → Pods. Click Create Pod to create a new application pod. Under the metadata section, add a name, for example, nfs-export-example, with the namespace openshift-storage. Under the spec: section, add a containers: section with image and volumeMounts sections: For example: Under the spec: section, add a volumes: section to add the NFS PVC as a volume for the application pod: For example: 14.4. Consuming NFS exports externally from the OpenShift cluster NFS clients outside of the OpenShift cluster can mount NFS exports created from a previously created PVC. Procedure After the nfs flag is enabled, a single-server CephNFS is deployed by Rook.
You need to fetch the value of the ceph_nfs field for the nfs-ganesha server to use in the next step: For example: Expose the NFS server outside of the OpenShift cluster by creating a Kubernetes LoadBalancer Service. The example below creates a LoadBalancer Service and references the NFS server created by OpenShift Data Foundation. Replace <my-nfs> with the value you got in step 1. Collect connection information. The information external clients need to connect to an export comes from the Persistent Volume (PV) created for the PVC, and the status of the LoadBalancer Service created in the previous step. Get the share path from the PV. Get the name of the PV associated with the NFS export's PVC: Replace <pvc_name> with your own PVC name. For example: Use the PV name obtained previously to get the NFS export's share path: Get an ingress address for the NFS server. A service's ingress status may have multiple addresses. Choose the one you want to use for external clients. In the example below, there is only a single address: the host name ingress-id.somedomain.com. Connect the external client using the share path and ingress address from the previous steps. The following example mounts the export to the client's directory path /export/mount/path: If this does not work immediately, it could be that the Kubernetes environment is still taking time to configure the network resources to allow ingress to the NFS server. | [
"oc --namespace openshift-storage patch storageclusters.ocs.openshift.io ocs-storagecluster --type merge --patch '{\"spec\": {\"nfs\":{\"enable\": true}}}'",
"oc --namespace openshift-storage patch configmap rook-ceph-operator-config --type merge --patch '{\"data\":{\"ROOK_CSI_ENABLE_NFS\": \"true\"}}'",
"-n openshift-storage describe cephnfs ocs-storagecluster-cephnfs",
"-n openshift-storage get pod | grep csi-nfsplugin",
"csi-nfsplugin-47qwq 2/2 Running 0 10s csi-nfsplugin-77947 2/2 Running 0 10s csi-nfsplugin-ct2pm 2/2 Running 0 10s csi-nfsplugin-provisioner-f85b75fbb-2rm2w 2/2 Running 0 10s csi-nfsplugin-provisioner-f85b75fbb-8nj5h 2/2 Running 0 10s",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: <desired_name> spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: ocs-storagecluster-ceph-nfs",
"apiVersion: v1 kind: Pod metadata: name: nfs-export-example spec: containers: - name: web-server image: nginx volumeMounts: - name: nfs-export-pvc mountPath: /var/lib/www/html volumes: - name: nfs-export-pvc persistentVolumeClaim: claimName: <pvc_name> readOnly: false",
"apiVersion: v1 kind: Pod metadata: name: nfs-export-example namespace: openshift-storage spec: containers: - name: web-server image: nginx volumeMounts: - name: <volume_name> mountPath: /var/lib/www/html",
"apiVersion: v1 kind: Pod metadata: name: nfs-export-example namespace: openshift-storage spec: containers: - name: web-server image: nginx volumeMounts: - name: nfs-export-pvc mountPath: /var/lib/www/html",
"volumes: - name: <volume_name> persistentVolumeClaim: claimName: <pvc_name>",
"volumes: - name: nfs-export-pvc persistentVolumeClaim: claimName: my-nfs-export",
"oc get pods -n openshift-storage | grep rook-ceph-nfs",
"oc describe pod <name of the rook-ceph-nfs pod> | grep ceph_nfs",
"oc describe pod rook-ceph-nfs-ocs-storagecluster-cephnfs-a-7bb484b4bf-bbdhs | grep ceph_nfs ceph_nfs=my-nfs",
"apiVersion: v1 kind: Service metadata: name: rook-ceph-nfs-ocs-storagecluster-cephnfs-load-balancer namespace: openshift-storage spec: ports: - name: nfs port: 2049 type: LoadBalancer externalTrafficPolicy: Local selector: app: rook-ceph-nfs ceph_nfs: <my-nfs> instance: a",
"oc get pvc <pvc_name> --output jsonpath='{.spec.volumeName}' pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d",
"get pvc pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d --output jsonpath='{.spec.volumeName}' pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d",
"oc get pv pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d --output jsonpath='{.spec.csi.volumeAttributes.share}' /0001-0011-openshift-storage-0000000000000001-ba9426ab-d61b-11ec-9ffd-0a580a800215",
"oc -n openshift-storage get service rook-ceph-nfs-ocs-storagecluster-cephnfs-load-balancer --output jsonpath='{.status.loadBalancer.ingress}' [{\"hostname\":\"ingress-id.somedomain.com\"}]",
"mount -t nfs4 -o proto=tcp ingress-id.somedomain.com:/0001-0011-openshift-storage-0000000000000001-ba9426ab-d61b-11ec-9ffd-0a580a800215 /export/mount/path"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/managing_and_allocating_storage_resources/creating-exports-using-nfs_rhodf |
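As a convenience, the external-mount steps from Chapter 14 above can be chained together in a short shell script. This is a minimal sketch based only on the commands shown in that chapter; the PVC name my-nfs-export, the openshift-storage namespace for the PVC, and the local mount point are illustrative assumptions, and the jsonpath picks the first ingress hostname, so adjust it if your LoadBalancer reports an IP address instead.

```bash
#!/usr/bin/env bash
# Sketch: collect the NFS share path and ingress address, then mount the export.
set -euo pipefail

PVC_NAME=my-nfs-export                    # assumed PVC name from the earlier example
NS=openshift-storage                      # assumed namespace of the PVC
SVC=rook-ceph-nfs-ocs-storagecluster-cephnfs-load-balancer
MOUNT_POINT=/export/mount/path

# The share path comes from the PV bound to the PVC.
PV_NAME=$(oc -n "$NS" get pvc "$PVC_NAME" --output jsonpath='{.spec.volumeName}')
SHARE=$(oc get pv "$PV_NAME" --output jsonpath='{.spec.csi.volumeAttributes.share}')

# The ingress address comes from the LoadBalancer Service status.
INGRESS=$(oc -n "$NS" get service "$SVC" \
  --output jsonpath='{.status.loadBalancer.ingress[0].hostname}')

sudo mkdir -p "$MOUNT_POINT"
sudo mount -t nfs4 -o proto=tcp "${INGRESS}:${SHARE}" "$MOUNT_POINT"
```

If the mount fails at first, retry after a short wait; as noted above, the cluster can take time to configure the network resources that allow ingress to the NFS server.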
Troubleshooting Collector | Troubleshooting Collector Red Hat Advanced Cluster Security for Kubernetes 4.7 Troubleshooting Collector Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/troubleshooting_collector/index |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate and prioritize your feedback regarding our documentation. Provide as much detail as possible, so that your request can be quickly addressed. Prerequisites You are logged in to the Red Hat Customer Portal. Procedure To provide feedback, perform the following steps: Click the following link: Create Issue Describe the issue or enhancement in the Summary text box. Provide details about the issue or requested enhancement in the Description text box. Type your name in the Reporter text box. Click the Create button. This action creates a documentation ticket and routes it to the appropriate documentation team. Thank you for taking the time to provide feedback. | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/monitoring_and_reacting_to_configuration_changes_using_policies/proc-providing-feedback-on-redhat-documentation |
Chapter 182. JPA Component | Chapter 182. JPA Component Available as of Camel version 1.0 The jpa component enables you to store and retrieve Java objects from persistent storage using EJB 3's Java Persistence Architecture (JPA), which is a standard interface layer that wraps Object/Relational Mapping (ORM) products such as OpenJPA, Hibernate, TopLink, and so on. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jpa</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 182.1. Sending to the endpoint You can store a Java entity bean in a database by sending it to a JPA producer endpoint. The body of the In message is assumed to be an entity bean (that is, a POJO with an @Entity annotation on it) or a collection or array of entity beans. If the body is a List of entities, make sure to use entityType=java.util.ArrayList as a configuration passed to the producer endpoint. If the body does not contain one of the listed types, put a Message Translator in front of the endpoint to perform the necessary conversion first. From Camel 2.19 onwards, you can use query, namedQuery, or nativeQuery for the producer as well. Also, in the value of the parameters, you can use a Simple expression, which allows you to retrieve parameter values from the message body, headers, and so on. These queries can be used to retrieve a set of data using a SELECT JPQL/SQL statement, as well as to execute a bulk update or delete using an UPDATE / DELETE JPQL/SQL statement. Note that you need to set useExecuteUpdate to true if you execute UPDATE / DELETE with namedQuery, because Camel does not inspect the named query, unlike query and nativeQuery. 182.2. Consuming from the endpoint Consuming messages from a JPA consumer endpoint removes (or updates) entity beans in the database. This allows you to use a database table as a logical queue: consumers take messages from the queue and then delete/update them to logically remove them from the queue. If you do not wish to delete the entity bean when it has been processed (and when routing is done), you can specify consumeDelete=false on the URI. This results in the entity being processed on each poll. If you would rather perform some update on the entity to mark it as processed (such as to exclude it from a future query), then you can annotate a method with @Consumed, which will be invoked on your entity bean when it has been processed (and when routing is done). From Camel 2.13 onwards you can use @PreConsumed, which will be invoked on your entity bean before it has been processed (before routing). If you are consuming a lot of rows (100K+) and experience OutOfMemory problems, you should set maximumResults to a sensible value. 182.3. URI format jpa:entityClassName[?options] For sending to the endpoint, the entityClassName is optional. If specified, it helps the Type Converter to ensure the body is of the correct type. For consuming, the entityClassName is mandatory. You can append query options to the URI in the following format, ?option=value&option=value&... 182.4. Options The JPA component supports 5 options, which are listed below. Name Description Default Type entityManagerFactory (common) To use the EntityManagerFactory. This is strongly recommended to configure. EntityManagerFactory transactionManager (common) To use the PlatformTransactionManager for managing transactions.
PlatformTransaction Manager joinTransaction (common) The camel-jpa component will join transaction by default. You can use this option to turn this off, for example if you use LOCAL_RESOURCE and join transaction doesn't work with your JPA provider. This option can also be set globally on the JpaComponent, instead of having to set it on all endpoints. true boolean sharedEntityManager (common) Whether to use Spring's SharedEntityManager for the consumer/producer. Note in most cases joinTransaction should be set to false as this is not an EXTENDED EntityManager. false boolean resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The JPA endpoint is configured using URI syntax: with the following path and query parameters: 182.4.1. Path Parameters (1 parameters): Name Description Default Type entityType Required The JPA annotated class to use as entity. Class 182.4.2. Query Parameters (42 parameters): Name Description Default Type joinTransaction (common) The camel-jpa component will join transaction by default. You can use this option to turn this off, for example if you use LOCAL_RESOURCE and join transaction doesn't work with your JPA provider. This option can also be set globally on the JpaComponent, instead of having to set it on all endpoints. true boolean maximumResults (common) Set the maximum number of results to retrieve on the Query. -1 int namedQuery (common) To use a named query. String nativeQuery (common) To use a custom native query. You may want to use the option resultClass also when using native queries. String parameters (common) This key/value mapping is used for building the query parameters. It is expected to be of the generic type java.util.Map where the keys are the named parameters of a given JPA query and the values are their corresponding effective values you want to select for. When it's used for producer, Simple expression can be used as a parameter value. It allows you to retrieve parameter values from the message body, header and etc. Map persistenceUnit (common) Required The JPA persistence unit used by default. camel String query (common) To use a custom query. String resultClass (common) Defines the type of the returned payload (we will call entityManager.createNativeQuery(nativeQuery, resultClass) instead of entityManager.createNativeQuery(nativeQuery)). Without this option, we will return an object array. Only has an affect when using in conjunction with native query when consuming data. Class sharedEntityManager (common) Whether to use Spring's SharedEntityManager for the consumer/producer. Note in most cases joinTransaction should be set to false as this is not an EXTENDED EntityManager. false boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean consumeDelete (consumer) If true, the entity is deleted after it is consumed; if false, the entity is not deleted. 
true boolean consumeLockEntity (consumer) Specifies whether or not to set an exclusive lock on each entity bean while processing the results from polling. true boolean deleteHandler (consumer) To use a custom DeleteHandler to delete the row after the consumer is done processing the exchange DeleteHandler lockModeType (consumer) To configure the lock mode on the consumer. PESSIMISTIC_WRITE LockModeType maxMessagesPerPoll (consumer) An integer value to define the maximum number of messages to gather per poll. By default, no maximum is set. Can be used to avoid polling many thousands of messages when starting up the server. Set a value of 0 or negative to disable. int preDeleteHandler (consumer) To use a custom Pre-DeleteHandler to delete the row after the consumer has read the entity. DeleteHandler sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean skipLockedEntity (consumer) To configure whether to use NOWAIT on lock and silently skip the entity. false boolean transacted (consumer) Whether to run the consumer in transacted mode, by which all messages will either commit or rollback, when the entire batch has been processed. The default behavior (false) is to commit all the previously successfully processed messages, and only rollback the last failed message. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern pollStrategy (consumer) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPoll Strategy flushOnSend (producer) Flushes the EntityManager after the entity bean has been persisted. true boolean remove (producer) Indicates to use entityManager.remove(entity). false boolean useExecuteUpdate (producer) To configure whether to use executeUpdate() when producer executes a query. When you use INSERT, UPDATE or DELETE statement as a named query, you need to specify this option to 'true'. Boolean usePassedInEntityManager (producer) If set to true, then Camel will use the EntityManager from the header JpaConstants.ENTITY_MANAGER instead of the configured entity manager on the component/endpoint. This allows end users to control which entity manager will be in use. false boolean usePersist (producer) Indicates to use entityManager.persist(entity) instead of entityManager.merge(entity). Note: entityManager.persist(entity) doesn't work for detached entities (where the EntityManager has to execute an UPDATE instead of an INSERT query)! false boolean entityManagerProperties (advanced) Additional properties for the entity manager to use. Map synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. 
int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 1000 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutor Service scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz2 component none ScheduledPollConsumer Scheduler schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz2, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean 182.5. Spring Boot Auto-Configuration The component supports 6 options, which are listed below. Name Description Default Type camel.component.jpa.enabled Enable jpa component true Boolean camel.component.jpa.entity-manager-factory To use the EntityManagerFactory. This is strongly recommended to configure. The option is a javax.persistence.EntityManagerFactory type. String camel.component.jpa.join-transaction The camel-jpa component will join transaction by default. You can use this option to turn this off, for example if you use LOCAL_RESOURCE and join transaction doesn't work with your JPA provider. This option can also be set globally on the JpaComponent, instead of having to set it on all endpoints. true Boolean camel.component.jpa.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.jpa.shared-entity-manager Whether to use Spring's SharedEntityManager for the consumer/producer. Note in most cases joinTransaction should be set to false as this is not an EXTENDED EntityManager. false Boolean camel.component.jpa.transaction-manager To use the PlatformTransactionManager for managing transactions. The option is a org.springframework.transaction.PlatformTransactionManager type. String 182.6. 
Message Headers Camel adds the following message headers to the exchange: Header Type Description CamelJpaTemplate JpaTemplate Not supported anymore since Camel 2.12: The JpaTemplate object that is used to access the entity bean. You need this object in some situations, for instance in a type converter or when you are doing some custom processing. See CAMEL-5932 for the reason why the support for this header has been dropped. CamelEntityManager EntityManager Camel 2.12: JPA consumer / Camel 2.12.2: JPA producer: The JPA EntityManager object being used by JpaConsumer or JpaProducer. CamelJpaParameters Map<String, Object> Camel 2.23: JPA producer: Alternative way for passing query parameters as an Exchange header. 182.7. Configuring EntityManagerFactory It is strongly advised to configure the JPA component to use a specific EntityManagerFactory instance. If you fail to do so, each JpaEndpoint will auto-create its own instance of EntityManagerFactory, which is most often not what you want. For example, you can instantiate a JPA component that references the myEMFactory entity manager factory, as follows: <bean id="jpa" class="org.apache.camel.component.jpa.JpaComponent"> <property name="entityManagerFactory" ref="myEMFactory"/> </bean> In Camel 2.3, the JpaComponent will automatically look up the EntityManagerFactory from the Registry, which means you do not need to configure this on the JpaComponent as shown above. You only need to do so if there is ambiguity, in which case Camel will log a WARN. 182.8. Configuring TransactionManager Since Camel 2.3, the JpaComponent will automatically look up the TransactionManager from the Registry. If Camel does not find any TransactionManager instance registered, it will also look up the TransactionTemplate and try to extract the TransactionManager from it. If no TransactionTemplate is available in the registry, each JpaEndpoint will auto-create its own instance of TransactionManager, which is most often not what you want. If more than a single instance of the TransactionManager is found, Camel will log a WARN. In such cases you might want to instantiate and explicitly configure a JPA component that references the myTransactionManager transaction manager, as follows: <bean id="jpa" class="org.apache.camel.component.jpa.JpaComponent"> <property name="entityManagerFactory" ref="myEMFactory"/> <property name="transactionManager" ref="myTransactionManager"/> </bean> 182.9. Using a consumer with a named query For consuming only selected entities, you can use the consumer.namedQuery URI query option. First, you have to define the named query in the JPA Entity class: @Entity @NamedQuery(name = "step1", query = "select x from MultiSteps x where x.step = 1") public class MultiSteps { ... } After that, you can define a consumer URI like this one: from("jpa://org.apache.camel.examples.MultiSteps?consumer.namedQuery=step1") .to("bean:myBusinessLogic"); 182.10. Using a consumer with a query For consuming only selected entities, you can use the consumer.query URI query option. You only have to define the query option: from("jpa://org.apache.camel.examples.MultiSteps?consumer.query=select o from org.apache.camel.examples.MultiSteps o where o.step = 1") .to("bean:myBusinessLogic"); 182.11. Using a consumer with a native query For consuming only selected entities, you can use the consumer.nativeQuery URI query option.
You only have to define the native query option: from("jpa://org.apache.camel.examples.MultiSteps?consumer.nativeQuery=select * from MultiSteps where step = 1") .to("bean:myBusinessLogic"); If you use the native query option, you will receive an object array in the message body. 182.12. Using a producer with a named query For retrieving selected entities or executing a bulk update or delete, you can use the namedQuery URI query option. First, you have to define the named query in the JPA Entity class: @Entity @NamedQuery(name = "step1", query = "select x from MultiSteps x where x.step = 1") public class MultiSteps { ... } After that, you can define a producer URI like this one: from("direct:namedQuery") .to("jpa://org.apache.camel.examples.MultiSteps?namedQuery=step1"); Note that you need to set the useExecuteUpdate option to true to execute an UPDATE / DELETE statement as a named query. 182.13. Using a producer with a query For retrieving selected entities or executing a bulk update or delete, you can use the query URI query option. You only have to define the query option: from("direct:query") .to("jpa://org.apache.camel.examples.MultiSteps?query=select o from org.apache.camel.examples.MultiSteps o where o.step = 1"); 182.14. Using a producer with a native query For retrieving selected entities or executing a bulk update or delete, you can use the nativeQuery URI query option. You only have to define the native query option: from("direct:nativeQuery") .to("jpa://org.apache.camel.examples.MultiSteps?resultClass=org.apache.camel.examples.MultiSteps&nativeQuery=select * from MultiSteps where step = 1"); If you use the native query option without specifying resultClass, you will receive an object array in the message body. 182.15. Example See Tracer Example for an example using JPA to store traced messages into a database. 182.16. Using the JPA-Based Idempotent Repository The Idempotent Consumer from the EIP patterns is used to filter out duplicate messages. A JPA-based idempotent repository is provided. To use the JPA-based idempotent repository: Procedure Set up a persistence-unit in the persistence.xml file: Set up an org.springframework.orm.jpa.JpaTemplate, which is used by the org.apache.camel.processor.idempotent.jpa.JpaMessageIdRepository: Configure the idempotent repository, org.apache.camel.processor.idempotent.jpa.JpaMessageIdRepository: Create the JPA idempotent repository in the Spring XML file: <camelContext xmlns="http://camel.apache.org/schema/spring"> <route id="JpaMessageIdRepositoryTest"> <from uri="direct:start" /> <idempotentConsumer messageIdRepositoryRef="jpaStore"> <header>messageId</header> <to uri="mock:result" /> </idempotentConsumer> </route> </camelContext> When running this Camel component's tests inside your IDE If you run the tests of this component directly inside your IDE, and not through Maven, then you could see exceptions like these: org.springframework.transaction.CannotCreateTransactionException: Could not open JPA EntityManager for transaction; nested exception is <openjpa-2.2.1-r422266:1396819 nonfatal user error> org.apache.openjpa.persistence.ArgumentException: This configuration disallows runtime optimization, but the following listed types were not enhanced at build time or at class load time with a javaagent: "org.apache.camel.examples.SendEmail".
at org.springframework.orm.jpa.JpaTransactionManager.doBegin(JpaTransactionManager.java:427) at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:371) at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:127) at org.apache.camel.processor.jpa.JpaRouteTest.cleanupRepository(JpaRouteTest.java:96) at org.apache.camel.processor.jpa.JpaRouteTest.createCamelContext(JpaRouteTest.java:67) at org.apache.camel.test.junit4.CamelTestSupport.doSetUp(CamelTestSupport.java:238) at org.apache.camel.test.junit4.CamelTestSupport.setUp(CamelTestSupport.java:208) The problem here is that the source has been compiled or recompiled through your IDE and not through Maven, which would enhance the byte-code at build time. To overcome this, you need to enable dynamic byte-code enhancement of OpenJPA. For example, assuming the current OpenJPA version being used in Camel is 2.2.1, to run the tests inside your IDE you would need to pass the following argument to the JVM: -javaagent:<path_to_your_local_m2_cache>/org/apache/openjpa/openjpa/2.2.1/openjpa-2.2.1.jar 182.17. Using the connection pool in transaction context 182.17.1. Setting the datasource connection pool size When the JpaTransactionManager is used, it needs a separate connection to control the transaction. Therefore, you must configure the JDBC connection pool for a capacity limit of at least two JDBC connections. This applies when camel-jpa is combined with transacted() and split(), multicast(), or recipientList(). Set the datasource connection pool size to at least 2. 182.17.2. Adding a method for the Content Enricher When using the Content Enricher from the EIP patterns with a custom aggregate strategy, you must copy the JpaConstants.ENTITY_MANAGER property from the newExchange to the oldExchange. Add a call to JpaHelper.copyEntityManagers to perform the copy operation: from("direct:enrich") .transacted().enrich("jpa://" + Example.class.getName(), new AggregationStrategy() { @Override public Exchange aggregate(Exchange oldExchange, Exchange newExchange) { JpaHelper.copyEntityManagers(oldExchange, newExchange); return newExchange; } }) .to("jpa://" + Example.class.getName()); 182.18. See Also Configuring Camel Component Endpoint Getting Started Tracer Example | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jpa</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"jpa:entityClassName[?options]",
"jpa:entityType",
"<bean id=\"jpa\" class=\"org.apache.camel.component.jpa.JpaComponent\"> <property name=\"entityManagerFactory\" ref=\"myEMFactory\"/> </bean>",
"<bean id=\"jpa\" class=\"org.apache.camel.component.jpa.JpaComponent\"> <property name=\"entityManagerFactory\" ref=\"myEMFactory\"/> <property name=\"transactionManager\" ref=\"myTransactionManager\"/> </bean>",
"@Entity @NamedQuery(name = \"step1\", query = \"select x from MultiSteps x where x.step = 1\") public class MultiSteps { }",
"from(\"jpa://org.apache.camel.examples.MultiSteps?consumer.namedQuery=step1\") .to(\"bean:myBusinessLogic\");",
"from(\"jpa://org.apache.camel.examples.MultiSteps?consumer.query=select o from org.apache.camel.examples.MultiSteps o where o.step = 1\") .to(\"bean:myBusinessLogic\");",
"from(\"jpa://org.apache.camel.examples.MultiSteps?consumer.nativeQuery=select * from MultiSteps where step = 1\") .to(\"bean:myBusinessLogic\");",
"@Entity @NamedQuery(name = \"step1\", query = \"select x from MultiSteps x where x.step = 1\") public class MultiSteps { }",
"from(\"direct:namedQuery\") .to(\"jpa://org.apache.camel.examples.MultiSteps?namedQuery=step1\");",
"from(\"direct:query\") .to(\"jpa://org.apache.camel.examples.MultiSteps?query=select o from org.apache.camel.examples.MultiSteps o where o.step = 1\");",
"from(\"direct:nativeQuery\") .to(\"jpa://org.apache.camel.examples.MultiSteps?resultClass=org.apache.camel.examples.MultiSteps&nativeQuery=select * from MultiSteps where step = 1\");",
"<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route id=\"JpaMessageIdRepositoryTest\"> <from uri=\"direct:start\" /> <idempotentConsumer messageIdRepositoryRef=\"jpaStore\"> <header>messageId</header> <to uri=\"mock:result\" /> </idempotentConsumer> </route> </camelContext>",
"org.springframework.transaction.CannotCreateTransactionException: Could not open JPA EntityManager for transaction; nested exception is <openjpa-2.2.1-r422266:1396819 nonfatal user error> org.apache.openjpa.persistence.ArgumentException: This configuration disallows runtime optimization, but the following listed types were not enhanced at build time or at class load time with a javaagent: \"org.apache.camel.examples.SendEmail\". at org.springframework.orm.jpa.JpaTransactionManager.doBegin(JpaTransactionManager.java:427) at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:371) at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:127) at org.apache.camel.processor.jpa.JpaRouteTest.cleanupRepository(JpaRouteTest.java:96) at org.apache.camel.processor.jpa.JpaRouteTest.createCamelContext(JpaRouteTest.java:67) at org.apache.camel.test.junit4.CamelTestSupport.doSetUp(CamelTestSupport.java:238) at org.apache.camel.test.junit4.CamelTestSupport.setUp(CamelTestSupport.java:208)",
"-javaagent:<path_to_your_local_m2_cache>/org/apache/openjpa/openjpa/2.2.1/openjpa-2.2.1.jar",
"from(\"direct:enrich\") .transacted().enrich(\"jpa://\" + Example.class.getName(), new AggregationStrategy() { @Override public Exchange aggregate(Exchange oldExchange, Exchange newExchange) { JpaHelper.copyEntityManagers(oldExchange, newExchange); return newExchange; } }) .to(\"jpa://\" + Example.class.getName());"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/jpa-component |
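The JPA chapter above describes the @Consumed and @PreConsumed callbacks and the consumeDelete=false option only in prose. The following is a minimal, hedged Java sketch of how those pieces might fit together; the ProcessedStep entity, its processed flag, the package name, and the bean:myBusinessLogic endpoint are illustrative assumptions rather than examples taken from the chapter.

```java
package org.example.jpa;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jpa.Consumed;
import org.apache.camel.component.jpa.PreConsumed;

// Illustrative entity: flagged as processed instead of being deleted after routing.
@Entity
public class ProcessedStep {

    @Id
    @GeneratedValue
    private Long id;

    private boolean processed;

    @PreConsumed
    public void beforeRouting() {
        // Invoked before the entity is routed (Camel 2.13+).
    }

    @Consumed
    public void markAsProcessed() {
        // Invoked after routing is done; with consumeDelete=false this flag can be
        // used to exclude the entity from future polls via the consumer query.
        this.processed = true;
    }

    public boolean isProcessed() {
        return processed;
    }
}

class ProcessedStepRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Poll only unprocessed entities and update them instead of deleting them.
        from("jpa://org.example.jpa.ProcessedStep"
                + "?consumeDelete=false"
                + "&consumer.query=select s from ProcessedStep s where s.processed = false")
            .to("bean:myBusinessLogic");
    }
}
```

Here the consumer query only selects rows that the @Consumed callback has not yet flagged, which is the "update instead of delete" pattern that section 182.2 describes.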
Chapter 73. service | Chapter 73. service This chapter describes the commands under the service command. 73.1. service create Create new service Usage: Table 73.1. Positional arguments Value Summary <type> New service type (compute, image, identity, volume, etc) Table 73.2. Command arguments Value Summary -h, --help Show this help message and exit --name <name> New service name --description <description> New service description --enable Enable service (default) --disable Disable service Table 73.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 73.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 73.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 73.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 73.2. service delete Delete service(s) Usage: Table 73.7. Positional arguments Value Summary <service> Service(s) to delete (type, name or id) Table 73.8. Command arguments Value Summary -h, --help Show this help message and exit 73.3. service list List services Usage: Table 73.9. Command arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output Table 73.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 73.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 73.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 73.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 73.4. service provider create Create new service provider Usage: Table 73.14. Positional arguments Value Summary <name> New service provider name (must be unique) Table 73.15. 
Command arguments Value Summary -h, --help Show this help message and exit --auth-url <auth-url> Authentication url of remote federated service provider (required) --description <description> New service provider description --service-provider-url <sp-url> A service url where saml assertions are being sent (required) --enable Enable the service provider (default) --disable Disable the service provider Table 73.16. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 73.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 73.18. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 73.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 73.5. service provider delete Delete service provider(s) Usage: Table 73.20. Positional arguments Value Summary <service-provider> Service provider(s) to delete Table 73.21. Command arguments Value Summary -h, --help Show this help message and exit 73.6. service provider list List service providers Usage: Table 73.22. Command arguments Value Summary -h, --help Show this help message and exit Table 73.23. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 73.24. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 73.25. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 73.26. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 73.7. service provider set Set service provider properties Usage: Table 73.27. Positional arguments Value Summary <service-provider> Service provider to modify Table 73.28. Command arguments Value Summary -h, --help Show this help message and exit --auth-url <auth-url> New authentication url of remote federated service provider --description <description> New service provider description --service-provider-url <sp-url> New service provider url, where saml assertions are sent --enable Enable the service provider --disable Disable the service provider 73.8. service provider show Display service provider details Usage: Table 73.29. 
Positional arguments Value Summary <service-provider> Service provider to display Table 73.30. Command arguments Value Summary -h, --help Show this help message and exit Table 73.31. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 73.32. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 73.33. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 73.34. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 73.9. service set Set service properties Usage: Table 73.35. Positional arguments Value Summary <service> Service to modify (type, name or id) Table 73.36. Command arguments Value Summary -h, --help Show this help message and exit --type <type> New service type (compute, image, identity, volume, etc) --name <service-name> New service name --description <description> New service description --enable Enable service --disable Disable service 73.10. service show Display service details Usage: Table 73.37. Positional arguments Value Summary <service> Service to display (type, name or id) Table 73.38. Command arguments Value Summary -h, --help Show this help message and exit Table 73.39. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 73.40. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 73.41. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 73.42. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack service create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <name>] [--description <description>] [--enable | --disable] <type>",
"openstack service delete [-h] <service> [<service> ...]",
"openstack service list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--long]",
"openstack service provider create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --auth-url <auth-url> [--description <description>] --service-provider-url <sp-url> [--enable | --disable] <name>",
"openstack service provider delete [-h] <service-provider> [<service-provider> ...]",
"openstack service provider list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending]",
"openstack service provider set [-h] [--auth-url <auth-url>] [--description <description>] [--service-provider-url <sp-url>] [--enable | --disable] <service-provider>",
"openstack service provider show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <service-provider>",
"openstack service set [-h] [--type <type>] [--name <service-name>] [--description <description>] [--enable | --disable] <service>",
"openstack service show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <service>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/service |
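Chapter 73 above documents the openstack service commands mainly through usage strings and option tables. As a rough illustration of a typical sequence, the following sketch registers, inspects, and removes a service catalog entry; the cinderv3 name, volumev3 type, and descriptions are assumptions for the example and must be replaced with values appropriate to your deployment.

```bash
# Hypothetical end-to-end use of the commands documented in this chapter.

# Register a new service entry in the Identity service catalog.
openstack service create \
  --name cinderv3 \
  --description "Block Storage v3" \
  --enable \
  volumev3

# List services, including the additional fields shown with --long.
openstack service list --long

# Display, update, and finally delete the entry (type, name, or ID is accepted).
openstack service show volumev3
openstack service set --description "Block Storage API v3" volumev3
openstack service delete volumev3
```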
Chapter 5. Developing Operators | Chapter 5. Developing Operators 5.1. About the Operator SDK The Operator Framework is an open source toolkit to manage Kubernetes native applications, called Operators , in an effective, automated, and scalable way. Operators take advantage of Kubernetes extensibility to deliver the automation advantages of cloud services, like provisioning, scaling, and backup and restore, while being able to run anywhere that Kubernetes can run. Operators make it easy to manage complex, stateful applications on top of Kubernetes. However, writing an Operator today can be difficult because of challenges such as using low-level APIs, writing boilerplate, and a lack of modularity, which leads to duplication. The Operator SDK, a component of the Operator Framework, provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator. Why use the Operator SDK? The Operator SDK simplifies this process of building Kubernetes-native applications, which can require deep, application-specific operational knowledge. The Operator SDK not only lowers that barrier, but it also helps reduce the amount of boilerplate code required for many common management capabilities, such as metering or monitoring. The Operator SDK is a framework that uses the controller-runtime library to make writing Operators easier by providing the following features: High-level APIs and abstractions to write the operational logic more intuitively Tools for scaffolding and code generation to quickly bootstrap a new project Integration with Operator Lifecycle Manager (OLM) to streamline packaging, installing, and running Operators on a cluster Extensions to cover common Operator use cases Metrics set up automatically in any generated Go-based Operator for use on clusters where the Prometheus Operator is deployed Operator authors with cluster administrator access to a Kubernetes-based cluster (such as OpenShift Container Platform) can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, Java, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work. Note OpenShift Container Platform 4.14 supports Operator SDK 1.31.0. 5.1.1. What are Operators? For an overview about basic Operator concepts and terminology, see Understanding Operators . 5.1.2. Development workflow The Operator SDK provides the following workflow to develop a new Operator: Create an Operator project by using the Operator SDK command-line interface (CLI). Define new resource APIs by adding custom resource definitions (CRDs). Specify resources to watch by using the Operator SDK API. Define the Operator reconciling logic in a designated handler and use the Operator SDK API to interact with resources. Use the Operator SDK CLI to build and generate the Operator deployment manifests. Figure 5.1. Operator SDK workflow At a high level, an Operator that uses the Operator SDK processes events for watched resources in an Operator author-defined handler and takes actions to reconcile the state of the application. 5.1.3. Additional resources Certified Operator Build Guide 5.2. Installing the Operator SDK CLI The Operator SDK provides a command-line interface (CLI) tool that Operator developers can use to build, test, and deploy an Operator. 
You can install the Operator SDK CLI on your workstation so that you are prepared to start authoring your own Operators. Operator authors with cluster administrator access to a Kubernetes-based cluster, such as OpenShift Container Platform, can use the Operator SDK CLI to develop their own Operators based on Go, Ansible, Java, or Helm. Kubebuilder is embedded into the Operator SDK as the scaffolding solution for Go-based Operators, which means existing Kubebuilder projects can be used as is with the Operator SDK and continue to work. Note OpenShift Container Platform 4.14 supports Operator SDK 1.31.0. 5.2.1. Installing the Operator SDK CLI on Linux You can install the Operator SDK CLI tool on Linux. Prerequisites Go v1.19+ docker v17.03+, podman v1.9.3+, or buildah v1.7+ Procedure Navigate to the OpenShift mirror site. From the latest 4.14 directory, download the latest version of the tarball for Linux. Unpack the archive: $ tar xvf operator-sdk-v1.31.0-ocp-linux-x86_64.tar.gz Make the file executable: $ chmod +x operator-sdk Move the extracted operator-sdk binary to a directory that is on your PATH. Tip To check your PATH: $ echo $PATH $ sudo mv ./operator-sdk /usr/local/bin/operator-sdk Verification After you install the Operator SDK CLI, verify that it is available: $ operator-sdk version Example output operator-sdk version: "v1.31.0-ocp", ... 5.2.2. Installing the Operator SDK CLI on macOS You can install the Operator SDK CLI tool on macOS. Prerequisites Go v1.19+ docker v17.03+, podman v1.9.3+, or buildah v1.7+ Procedure For the amd64 and arm64 architectures, navigate to the OpenShift mirror site for the amd64 architecture and the OpenShift mirror site for the arm64 architecture respectively. From the latest 4.14 directory, download the latest version of the tarball for macOS. Unpack the Operator SDK archive for the amd64 architecture by running the following command: $ tar xvf operator-sdk-v1.31.0-ocp-darwin-x86_64.tar.gz Unpack the Operator SDK archive for the arm64 architecture by running the following command: $ tar xvf operator-sdk-v1.31.0-ocp-darwin-aarch64.tar.gz Make the file executable by running the following command: $ chmod +x operator-sdk Move the extracted operator-sdk binary to a directory that is on your PATH by running the following command: Tip Check your PATH by running the following command: $ echo $PATH $ sudo mv ./operator-sdk /usr/local/bin/operator-sdk Verification After you install the Operator SDK CLI, verify that it is available by running the following command: $ operator-sdk version Example output operator-sdk version: "v1.31.0-ocp", ... 5.3. Go-based Operators 5.3.1. Getting started with Operator SDK for Go-based Operators To demonstrate the basics of setting up and running a Go-based Operator using tools and libraries provided by the Operator SDK, Operator developers can build an example Go-based Operator for Memcached, a distributed key-value store, and deploy it to a cluster. 5.3.1.1. Prerequisites Operator SDK CLI installed OpenShift CLI (oc) 4.14+ installed Go 1.21+ Logged in to an OpenShift Container Platform 4.14 cluster with oc with an account that has cluster-admin permissions To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret Additional resources Installing the Operator SDK CLI Getting started with the OpenShift CLI 5.3.1.2.
Creating and deploying Go-based Operators You can build and deploy a simple Go-based Operator for Memcached by using the Operator SDK. Procedure Create a project. Create your project directory: $ mkdir memcached-operator Change into the project directory: $ cd memcached-operator Run the operator-sdk init command to initialize the project: $ operator-sdk init \ --domain=example.com \ --repo=github.com/example-inc/memcached-operator The command uses the Go plugin by default. Create an API. Create a simple Memcached API: $ operator-sdk create api \ --resource=true \ --controller=true \ --group cache \ --version v1 \ --kind Memcached Build and push the Operator image. Use the default Makefile targets to build and push your Operator. Set IMG with a pull spec for your image that uses a registry you can push to: $ make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag> Run the Operator. Install the CRD: $ make install Deploy the project to the cluster. Set IMG to the image that you pushed: $ make deploy IMG=<registry>/<user>/<image_name>:<tag> Create a sample custom resource (CR). Create a sample CR: $ oc apply -f config/samples/cache_v1_memcached.yaml \ -n memcached-operator-system Watch for the Operator to reconcile the CR: $ oc logs deployment.apps/memcached-operator-controller-manager \ -c manager \ -n memcached-operator-system Delete a CR. Delete a CR by running the following command: $ oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system Clean up. Run the following command to clean up the resources that have been created as part of this procedure: $ make undeploy 5.3.1.3. Next steps See Operator SDK tutorial for Go-based Operators for a more in-depth walkthrough on building a Go-based Operator. 5.3.2. Operator SDK tutorial for Go-based Operators Operator developers can take advantage of Go programming language support in the Operator SDK to build an example Go-based Operator for Memcached, a distributed key-value store, and manage its lifecycle. This process is accomplished using two centerpieces of the Operator Framework: Operator SDK The operator-sdk CLI tool and controller-runtime library API Operator Lifecycle Manager (OLM) Installation, upgrade, and role-based access control (RBAC) of Operators on a cluster Note This tutorial goes into greater detail than Getting started with Operator SDK for Go-based Operators. 5.3.2.1. Prerequisites Operator SDK CLI installed OpenShift CLI (oc) 4.14+ installed Go 1.21+ Logged in to an OpenShift Container Platform 4.14 cluster with oc with an account that has cluster-admin permissions To allow the cluster to pull the image, the repository where you push your image must be set as public, or you must configure an image pull secret Additional resources Installing the Operator SDK CLI Getting started with the OpenShift CLI 5.3.2.2. Creating a project Use the Operator SDK CLI to create a project called memcached-operator. Procedure Create a directory for the project: $ mkdir -p $HOME/projects/memcached-operator Change to the directory: $ cd $HOME/projects/memcached-operator Activate support for Go modules: $ export GO111MODULE=on Run the operator-sdk init command to initialize the project: $ operator-sdk init \ --domain=example.com \ --repo=github.com/example-inc/memcached-operator Note The operator-sdk init command uses the Go plugin by default. The operator-sdk init command generates a go.mod file to be used with Go modules.
5.3.2.2.1. PROJECT file Among the files generated by the operator-sdk init command is a Kubebuilder PROJECT file. Subsequent operator-sdk commands, as well as help output, that are run from the project root read this file and are aware that the project type is Go. For example: domain: example.com layout: - go.kubebuilder.io/v3 projectName: memcached-operator repo: github.com/example-inc/memcached-operator version: "3" plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {}
5.3.2.2.2. About the Manager The main program for the Operator is the main.go file, which initializes and runs the Manager . The Manager automatically registers the Scheme for all custom resource (CR) API definitions and sets up and runs controllers and webhooks. The Manager can restrict the namespace that all controllers watch for resources: mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: namespace}) By default, the Manager watches the namespace where the Operator runs. To watch all namespaces, you can leave the namespace option empty: mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: ""}) You can also use the MultiNamespacedCacheBuilder function to watch a specific set of namespaces: var namespaces []string 1 mgr, err := ctrl.NewManager(cfg, manager.Options{ 2 NewCache: cache.MultiNamespacedCacheBuilder(namespaces), }) 1 List of namespaces. 2 Creates a Cmd struct to provide shared dependencies and start components.
5.3.2.2.3. About multi-group APIs Before you create an API and controller, consider whether your Operator requires multiple API groups. This tutorial covers the default case of a single group API, but to change the layout of your project to support multi-group APIs, you can run the following command: USD operator-sdk edit --multigroup=true This command updates the PROJECT file, which should look like the following example: domain: example.com layout: go.kubebuilder.io/v3 multigroup: true ... For multi-group projects, the API Go type files are created in the apis/<group>/<version>/ directory, and the controllers are created in the controllers/<group>/ directory. The Dockerfile is then updated accordingly. Additional resource For more details on migrating to a multi-group project, see the Kubebuilder documentation .
5.3.2.3. Creating an API and controller Use the Operator SDK CLI to create a custom resource definition (CRD) API and controller. Procedure Run the following command to create an API with group cache , version v1 , and kind Memcached : USD operator-sdk create api \ --group=cache \ --version=v1 \ --kind=Memcached When prompted, enter y for creating both the resource and controller: Create Resource [y/n] y Create Controller [y/n] y Example output Writing scaffold for you to edit... api/v1/memcached_types.go controllers/memcached_controller.go ... This process generates the Memcached resource API at api/v1/memcached_types.go and the controller at controllers/memcached_controller.go .
5.3.2.3.1. Defining the API Define the API for the Memcached custom resource (CR).
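Before editing the spec and status, it helps to know what else the scaffolded api/v1/memcached_types.go contains: the root Memcached object, its list type, and the registration with the scheme builder. The following condensed sketch assumes the standard Kubebuilder layout; the generated file carries additional comments and markers, so expect minor differences.

package v1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// MemcachedSpec defines the desired state of Memcached
type MemcachedSpec struct {
	// Size is the size of the memcached deployment
	Size int32 `json:"size"`
}

// MemcachedStatus defines the observed state of Memcached
type MemcachedStatus struct {
	// Nodes are the names of the memcached pods
	Nodes []string `json:"nodes"`
}

//+kubebuilder:object:root=true
//+kubebuilder:subresource:status

// Memcached is the Schema for the memcacheds API
type Memcached struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   MemcachedSpec   `json:"spec,omitempty"`
	Status MemcachedStatus `json:"status,omitempty"`
}

//+kubebuilder:object:root=true

// MemcachedList contains a list of Memcached
type MemcachedList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []Memcached `json:"items"`
}

func init() {
	// SchemeBuilder is declared in the generated groupversion_info.go file.
	SchemeBuilder.Register(&Memcached{}, &MemcachedList{})
}

The MemcachedSpec and MemcachedStatus definitions in the next step replace the placeholder fields that the scaffold generates for these two structs.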
Procedure Modify the Go type definitions at api/v1/memcached_types.go to have the following spec and status : // MemcachedSpec defines the desired state of Memcached type MemcachedSpec struct { // +kubebuilder:validation:Minimum=0 // Size is the size of the memcached deployment Size int32 `json:"size"` } // MemcachedStatus defines the observed state of Memcached type MemcachedStatus struct { // Nodes are the names of the memcached pods Nodes []string `json:"nodes"` } Update the generated code for the resource type: USD make generate Tip After you modify a *_types.go file, you must run the make generate command to update the generated code for that resource type. The above Makefile target invokes the controller-gen utility to update the api/v1/zz_generated.deepcopy.go file. This ensures your API Go type definitions implement the runtime.Object interface that all Kind types must implement.
5.3.2.3.2. Generating CRD manifests After the API is defined with spec and status fields and custom resource definition (CRD) validation markers, you can generate CRD manifests. Procedure Run the following command to generate and update CRD manifests: USD make manifests This Makefile target invokes the controller-gen utility to generate the CRD manifests in the config/crd/bases/cache.example.com_memcacheds.yaml file.
5.3.2.3.2.1. About OpenAPI validation OpenAPIv3 schemas are added to CRD manifests in the spec.validation block when the manifests are generated. This validation block allows Kubernetes to validate the properties in a Memcached custom resource (CR) when it is created or updated. Markers, or annotations, are available to configure validations for your API. These markers always have a +kubebuilder:validation prefix. Additional resources For more details on the usage of markers in API code, see the following Kubebuilder documentation: CRD generation Markers List of OpenAPIv3 validation markers For more details about OpenAPIv3 validation schemas in CRDs, see the Kubernetes documentation .
5.3.2.4. Implementing the controller After creating a new API and controller, you can implement the controller logic. Procedure For this example, replace the generated controller file controllers/memcached_controller.go with the following example implementation: Example 5.1. Example memcached_controller.go /*
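The full example implementation is reproduced in the command listing that follows. As an aside on the OpenAPI validation described above, a field can carry additional +kubebuilder:validation markers to tighten the generated schema. The sketch below is illustrative only; the Maximum and default values are assumptions and are not part of the tutorial's API.

package v1

// MemcachedSpec with illustrative extra validation markers (not part of the
// tutorial's definition). After running `make manifests`, each marker becomes
// an OpenAPIv3 constraint in the CRD generated under config/crd/bases/.
type MemcachedSpec struct {
	// Size is the size of the memcached deployment
	// +kubebuilder:validation:Minimum=0
	// +kubebuilder:validation:Maximum=5
	// +kubebuilder:default=1
	Size int32 `json:"size"`
}

With such markers in place, a custom resource that sets spec.size outside the allowed range is rejected by the API server when it is created or updated.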
"tar xvf operator-sdk-v1.31.0-ocp-linux-x86_64.tar.gz",
"chmod +x operator-sdk",
"echo USDPATH",
"sudo mv ./operator-sdk /usr/local/bin/operator-sdk",
"operator-sdk version",
"operator-sdk version: \"v1.31.0-ocp\",",
"tar xvf operator-sdk-v1.31.0-ocp-darwin-x86_64.tar.gz",
"tar xvf operator-sdk-v1.31.0-ocp-darwin-aarch64.tar.gz",
"chmod +x operator-sdk",
"echo USDPATH",
"sudo mv ./operator-sdk /usr/local/bin/operator-sdk",
"operator-sdk version",
"operator-sdk version: \"v1.31.0-ocp\",",
"mkdir memcached-operator",
"cd memcached-operator",
"operator-sdk init --domain=example.com --repo=github.com/example-inc/memcached-operator",
"operator-sdk create api --resource=true --controller=true --group cache --version v1 --kind Memcached",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"oc logs deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system",
"oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"make undeploy",
"mkdir -p USDHOME/projects/memcached-operator",
"cd USDHOME/projects/memcached-operator",
"export GO111MODULE=on",
"operator-sdk init --domain=example.com --repo=github.com/example-inc/memcached-operator",
"domain: example.com layout: - go.kubebuilder.io/v3 projectName: memcached-operator repo: github.com/example-inc/memcached-operator version: \"3\" plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {}",
"mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: namespace})",
"mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: \"\"})",
"var namespaces []string 1 mgr, err := ctrl.NewManager(cfg, manager.Options{ 2 NewCache: cache.MultiNamespacedCacheBuilder(namespaces), })",
"operator-sdk edit --multigroup=true",
"domain: example.com layout: go.kubebuilder.io/v3 multigroup: true",
"operator-sdk create api --group=cache --version=v1 --kind=Memcached",
"Create Resource [y/n] y Create Controller [y/n] y",
"Writing scaffold for you to edit api/v1/memcached_types.go controllers/memcached_controller.go",
"// MemcachedSpec defines the desired state of Memcached type MemcachedSpec struct { // +kubebuilder:validation:Minimum=0 // Size is the size of the memcached deployment Size int32 `json:\"size\"` } // MemcachedStatus defines the observed state of Memcached type MemcachedStatus struct { // Nodes are the names of the memcached pods Nodes []string `json:\"nodes\"` }",
"make generate",
"make manifests",
"/* Copyright 2020. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package controllers import ( appsv1 \"k8s.io/api/apps/v1\" corev1 \"k8s.io/api/core/v1\" \"k8s.io/apimachinery/pkg/api/errors\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/types\" \"reflect\" \"context\" \"github.com/go-logr/logr\" \"k8s.io/apimachinery/pkg/runtime\" ctrl \"sigs.k8s.io/controller-runtime\" \"sigs.k8s.io/controller-runtime/pkg/client\" ctrllog \"sigs.k8s.io/controller-runtime/pkg/log\" cachev1 \"github.com/example-inc/memcached-operator/api/v1\" ) // MemcachedReconciler reconciles a Memcached object type MemcachedReconciler struct { client.Client Log logr.Logger Scheme *runtime.Scheme } // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list; // Reconcile is part of the main kubernetes reconciliation loop which aims to // move the current state of the cluster closer to the desired state. // TODO(user): Modify the Reconcile function to compare the state specified by // the Memcached object against the actual cluster state, and then // perform operations to make the cluster state reflect the state specified by // the user. // // For more details, check Reconcile and its Result here: // - https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/reconcile func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { //log := r.Log.WithValues(\"memcached\", req.NamespacedName) log := ctrllog.FromContext(ctx) // Fetch the Memcached instance memcached := &cachev1.Memcached{} err := r.Get(ctx, req.NamespacedName, memcached) if err != nil { if errors.IsNotFound(err) { // Request object not found, could have been deleted after reconcile request. // Owned objects are automatically garbage collected. For additional cleanup logic use finalizers. // Return and don't requeue log.Info(\"Memcached resource not found. Ignoring since object must be deleted\") return ctrl.Result{}, nil } // Error reading the object - requeue the request. 
log.Error(err, \"Failed to get Memcached\") return ctrl.Result{}, err } // Check if the deployment already exists, if not create a new one found := &appsv1.Deployment{} err = r.Get(ctx, types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found) if err != nil && errors.IsNotFound(err) { // Define a new deployment dep := r.deploymentForMemcached(memcached) log.Info(\"Creating a new Deployment\", \"Deployment.Namespace\", dep.Namespace, \"Deployment.Name\", dep.Name) err = r.Create(ctx, dep) if err != nil { log.Error(err, \"Failed to create new Deployment\", \"Deployment.Namespace\", dep.Namespace, \"Deployment.Name\", dep.Name) return ctrl.Result{}, err } // Deployment created successfully - return and requeue return ctrl.Result{Requeue: true}, nil } else if err != nil { log.Error(err, \"Failed to get Deployment\") return ctrl.Result{}, err } // Ensure the deployment size is the same as the spec size := memcached.Spec.Size if *found.Spec.Replicas != size { found.Spec.Replicas = &size err = r.Update(ctx, found) if err != nil { log.Error(err, \"Failed to update Deployment\", \"Deployment.Namespace\", found.Namespace, \"Deployment.Name\", found.Name) return ctrl.Result{}, err } // Spec updated - return and requeue return ctrl.Result{Requeue: true}, nil } // Update the Memcached status with the pod names // List the pods for this memcached's deployment podList := &corev1.PodList{} listOpts := []client.ListOption{ client.InNamespace(memcached.Namespace), client.MatchingLabels(labelsForMemcached(memcached.Name)), } if err = r.List(ctx, podList, listOpts...); err != nil { log.Error(err, \"Failed to list pods\", \"Memcached.Namespace\", memcached.Namespace, \"Memcached.Name\", memcached.Name) return ctrl.Result{}, err } podNames := getPodNames(podList.Items) // Update status.Nodes if needed if !reflect.DeepEqual(podNames, memcached.Status.Nodes) { memcached.Status.Nodes = podNames err := r.Status().Update(ctx, memcached) if err != nil { log.Error(err, \"Failed to update Memcached status\") return ctrl.Result{}, err } } return ctrl.Result{}, nil } // deploymentForMemcached returns a memcached Deployment object func (r *MemcachedReconciler) deploymentForMemcached(m *cachev1.Memcached) *appsv1.Deployment { ls := labelsForMemcached(m.Name) replicas := m.Spec.Size dep := &appsv1.Deployment{ ObjectMeta: metav1.ObjectMeta{ Name: m.Name, Namespace: m.Namespace, }, Spec: appsv1.DeploymentSpec{ Replicas: &replicas, Selector: &metav1.LabelSelector{ MatchLabels: ls, }, Template: corev1.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: ls, }, Spec: corev1.PodSpec{ Containers: []corev1.Container{{ Image: \"memcached:1.4.36-alpine\", Name: \"memcached\", Command: []string{\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\"}, Ports: []corev1.ContainerPort{{ ContainerPort: 11211, Name: \"memcached\", }}, }}, }, }, }, } // Set Memcached instance as the owner and controller ctrl.SetControllerReference(m, dep, r.Scheme) return dep } // labelsForMemcached returns the labels for selecting the resources // belonging to the given memcached CR name. func labelsForMemcached(name string) map[string]string { return map[string]string{\"app\": \"memcached\", \"memcached_cr\": name} } // getPodNames returns the pod names of the array of pods passed in func getPodNames(pods []corev1.Pod) []string { var podNames []string for _, pod := range pods { podNames = append(podNames, pod.Name) } return podNames } // SetupWithManager sets up the controller with the Manager. 
func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). Complete(r) }",
"import ( appsv1 \"k8s.io/api/apps/v1\" ) func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). Complete(r) }",
"func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). WithOptions(controller.Options{ MaxConcurrentReconciles: 2, }). Complete(r) }",
"import ( ctrl \"sigs.k8s.io/controller-runtime\" cachev1 \"github.com/example-inc/memcached-operator/api/v1\" ) func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { // Lookup the Memcached instance for this reconcile request memcached := &cachev1.Memcached{} err := r.Get(ctx, req.NamespacedName, memcached) }",
"// Reconcile successful - don't requeue return ctrl.Result{}, nil // Reconcile failed due to error - requeue return ctrl.Result{}, err // Requeue for any reason other than an error return ctrl.Result{Requeue: true}, nil",
"import \"time\" // Reconcile for any reason other than an error after 5 seconds return ctrl.Result{RequeueAfter: time.Second*5}, nil",
"// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list; func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { }",
"import ( \"github.com/operator-framework/operator-lib/proxy\" )",
"for i, container := range dep.Spec.Template.Spec.Containers { dep.Spec.Template.Spec.Containers[i].Env = append(container.Env, proxy.ReadProxyVarsFromEnv()...) }",
"containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"",
"make install run",
"2021-01-10T21:09:29.016-0700 INFO controller-runtime.metrics metrics server is starting to listen {\"addr\": \":8080\"} 2021-01-10T21:09:29.017-0700 INFO setup starting manager 2021-01-10T21:09:29.017-0700 INFO controller-runtime.manager starting metrics server {\"path\": \"/metrics\"} 2021-01-10T21:09:29.018-0700 INFO controller-runtime.manager.controller.memcached Starting EventSource {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\", \"source\": \"kind source: /, Kind=\"} 2021-01-10T21:09:29.218-0700 INFO controller-runtime.manager.controller.memcached Starting Controller {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\"} 2021-01-10T21:09:29.218-0700 INFO controller-runtime.manager.controller.memcached Starting workers {\"reconciler group\": \"cache.example.com\", \"reconciler kind\": \"Memcached\", \"worker count\": 1}",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"oc project memcached-operator-system",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3",
"oc apply -f config/samples/cache_v1_memcached.yaml",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 8m memcached-sample 3/3 3 3 1m",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m",
"oc get memcached/memcached-sample -o yaml",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3 status: nodes: - memcached-sample-6fd7c98d8-7dqdr - memcached-sample-6fd7c98d8-g5k7v - memcached-sample-6fd7c98d8-m7vn7",
"oc patch memcached memcached-sample -p '{\"spec\":{\"size\": 5}}' --type=merge",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 10m memcached-sample 5/5 5 5 3m",
"oc delete -f config/samples/cache_v1_memcached.yaml",
"make undeploy",
"operator-sdk cleanup <project_name>",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.31.0-ocp",
"mkdir memcached-operator",
"cd memcached-operator",
"operator-sdk init --plugins=ansible --domain=example.com",
"operator-sdk create api --group cache --version v1 --kind Memcached --generate-role 1",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"oc logs deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system",
"I0205 17:48:45.881666 7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612547325.8819902,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612547325.98242,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612547325.9824686,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":4} {\"level\":\"info\",\"ts\":1612547348.8311093,\"logger\":\"runner\",\"msg\":\"Ansible-runner exited successfully\",\"job\":\"4037200794235010051\",\"name\":\"memcached-sample\",\"namespace\":\"memcached-operator-system\"}",
"oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"make undeploy",
"mkdir -p USDHOME/projects/memcached-operator",
"cd USDHOME/projects/memcached-operator",
"operator-sdk init --plugins=ansible --domain=example.com",
"domain: example.com layout: - ansible.sdk.operatorframework.io/v1 plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {} projectName: memcached-operator version: \"3\"",
"operator-sdk create api --group cache --version v1 --kind Memcached --generate-role 1",
"--- - name: start memcached k8s: definition: kind: Deployment apiVersion: apps/v1 metadata: name: '{{ ansible_operator_meta.name }}-memcached' namespace: '{{ ansible_operator_meta.namespace }}' spec: replicas: \"{{size}}\" selector: matchLabels: app: memcached template: metadata: labels: app: memcached spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v image: \"docker.io/memcached:1.4.36-alpine\" ports: - containerPort: 11211",
"--- defaults file for Memcached size: 1",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: labels: app.kubernetes.io/name: memcached app.kubernetes.io/instance: memcached-sample app.kubernetes.io/part-of: memcached-operator app.kubernetes.io/managed-by: kustomize app.kubernetes.io/created-by: memcached-operator name: memcached-sample spec: size: 3",
"env: - name: HTTP_PROXY value: '{{ lookup(\"env\", \"HTTP_PROXY\") | default(\"\", True) }}' - name: http_proxy value: '{{ lookup(\"env\", \"HTTP_PROXY\") | default(\"\", True) }}'",
"containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"",
"make install run",
"{\"level\":\"info\",\"ts\":1612589622.7888272,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612589622.7897573,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} {\"level\":\"info\",\"ts\":1612589622.789971,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612589622.7899997,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612589622.8904517,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612589622.8905244,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":8}",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"oc project memcached-operator-system",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3",
"oc apply -f config/samples/cache_v1_memcached.yaml",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 8m memcached-sample 3/3 3 3 1m",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m",
"oc get memcached/memcached-sample -o yaml",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3 status: nodes: - memcached-sample-6fd7c98d8-7dqdr - memcached-sample-6fd7c98d8-g5k7v - memcached-sample-6fd7c98d8-m7vn7",
"oc patch memcached memcached-sample -p '{\"spec\":{\"size\": 5}}' --type=merge",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 10m memcached-sample 5/5 5 5 3m",
"oc delete -f config/samples/cache_v1_memcached.yaml",
"make undeploy",
"operator-sdk cleanup <project_name>",
"FROM quay.io/operator-framework/ansible-operator:v1.31.0",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.31.0-ocp",
"collections: - - name: community.kubernetes 1 - version: \"2.0.1\" - name: operator_sdk.util - version: \"0.4.0\" + version: \"0.5.0\" 2 - name: kubernetes.core version: \"2.4.0\" - name: cloud.common",
"--- dependency: name: galaxy driver: name: delegated - lint: | - set -e - yamllint -d \"{extends: relaxed, rules: {line-length: {max: 120}}}\" . platforms: - name: cluster groups: - k8s provisioner: name: ansible - lint: | - set -e ansible-lint inventory: group_vars: all: namespace: USD{TEST_OPERATOR_NAMESPACE:-osdk-test} host_vars: localhost: ansible_python_interpreter: '{{ ansible_playbook_python }}' config_dir: USD{MOLECULE_PROJECT_DIRECTORY}/config samples_dir: USD{MOLECULE_PROJECT_DIRECTORY}/config/samples operator_image: USD{OPERATOR_IMAGE:-\"\"} operator_pull_policy: USD{OPERATOR_PULL_POLICY:-\"Always\"} kustomize: USD{KUSTOMIZE_PATH:-kustomize} env: K8S_AUTH_KUBECONFIG: USD{KUBECONFIG:-\"~/.kube/config\"} verifier: name: ansible - lint: | - set -e - ansible-lint",
"apiVersion: \"test1.example.com/v1alpha1\" kind: \"Test1\" metadata: name: \"example\" annotations: ansible.operator-sdk/reconcile-period: \"30s\"",
"- version: v1alpha1 1 group: test1.example.com kind: Test1 role: /opt/ansible/roles/Test1 - version: v1alpha1 2 group: test2.example.com kind: Test2 playbook: /opt/ansible/playbook.yml - version: v1alpha1 3 group: test3.example.com kind: Test3 playbook: /opt/ansible/test3.yml reconcilePeriod: 0 manageStatus: false",
"- version: v1alpha1 group: app.example.com kind: AppService playbook: /opt/ansible/playbook.yml maxRunnerArtifacts: 30 reconcilePeriod: 5s manageStatus: False watchDependentResources: False",
"apiVersion: \"app.example.com/v1alpha1\" kind: \"Database\" metadata: name: \"example\" spec: message: \"Hello world 2\" newParameter: \"newParam\"",
"{ \"meta\": { \"name\": \"<cr_name>\", \"namespace\": \"<cr_namespace>\", }, \"message\": \"Hello world 2\", \"new_parameter\": \"newParam\", \"_app_example_com_database\": { <full_crd> }, }",
"--- - debug: msg: \"name: {{ ansible_operator_meta.name }}, {{ ansible_operator_meta.namespace }}\"",
"sudo dnf install ansible",
"pip install kubernetes",
"ansible-galaxy collection install community.kubernetes",
"ansible-galaxy collection install -r requirements.yml",
"--- - name: set ConfigMap example-config to {{ state }} community.kubernetes.k8s: api_version: v1 kind: ConfigMap name: example-config namespace: <operator_namespace> 1 state: \"{{ state }}\" ignore_errors: true 2",
"--- state: present",
"--- - hosts: localhost roles: - <kind>",
"ansible-playbook playbook.yml",
"[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' PLAY [localhost] ******************************************************************************** TASK [Gathering Facts] ******************************************************************************** ok: [localhost] TASK [memcached : set ConfigMap example-config to present] ******************************************************************************** changed: [localhost] PLAY RECAP ******************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
"oc get configmaps",
"NAME DATA AGE example-config 0 2m1s",
"ansible-playbook playbook.yml --extra-vars state=absent",
"[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' PLAY [localhost] ******************************************************************************** TASK [Gathering Facts] ******************************************************************************** ok: [localhost] TASK [memcached : set ConfigMap example-config to absent] ******************************************************************************** changed: [localhost] PLAY RECAP ******************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
"oc get configmaps",
"apiVersion: \"test1.example.com/v1alpha1\" kind: \"Test1\" metadata: name: \"example\" annotations: ansible.operator-sdk/reconcile-period: \"30s\"",
"make install",
"/usr/bin/kustomize build config/crd | kubectl apply -f - customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created",
"make run",
"/home/user/memcached-operator/bin/ansible-operator run {\"level\":\"info\",\"ts\":1612739145.2871568,\"logger\":\"cmd\",\"msg\":\"Version\",\"Go Version\":\"go1.15.5\",\"GOOS\":\"linux\",\"GOARCH\":\"amd64\",\"ansible-operator\":\"v1.10.1\",\"commit\":\"1abf57985b43bf6a59dcd18147b3c574fa57d3f6\"} {\"level\":\"info\",\"ts\":1612739148.347306,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\":8080\"} {\"level\":\"info\",\"ts\":1612739148.3488882,\"logger\":\"watches\",\"msg\":\"Environment variable not set; using default value\",\"envVar\":\"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM\",\"default\":2} {\"level\":\"info\",\"ts\":1612739148.3490262,\"logger\":\"cmd\",\"msg\":\"Environment variable not set; using default value\",\"Namespace\":\"\",\"envVar\":\"ANSIBLE_DEBUG_LOGS\",\"ANSIBLE_DEBUG_LOGS\":false} {\"level\":\"info\",\"ts\":1612739148.3490646,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612739148.350217,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} {\"level\":\"info\",\"ts\":1612739148.3506632,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612739148.350784,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612739148.5511978,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612739148.5512562,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":8}",
"apiVersion: <group>.example.com/v1alpha1 kind: <kind> metadata: name: \"<kind>-sample\"",
"oc apply -f config/samples/<gvk>.yaml",
"oc get configmaps",
"NAME STATUS AGE example-config Active 3s",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: state: absent",
"oc apply -f config/samples/<gvk>.yaml",
"oc get configmap",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"oc logs deployment/<project_name>-controller-manager -c manager \\ 1 -n <namespace> 2",
"{\"level\":\"info\",\"ts\":1612732105.0579333,\"logger\":\"cmd\",\"msg\":\"Version\",\"Go Version\":\"go1.15.5\",\"GOOS\":\"linux\",\"GOARCH\":\"amd64\",\"ansible-operator\":\"v1.10.1\",\"commit\":\"1abf57985b43bf6a59dcd18147b3c574fa57d3f6\"} {\"level\":\"info\",\"ts\":1612732105.0587437,\"logger\":\"cmd\",\"msg\":\"WATCH_NAMESPACE environment variable not set. Watching all namespaces.\",\"Namespace\":\"\"} I0207 21:08:26.110949 7 request.go:645] Throttling request took 1.035521578s, request: GET:https://172.30.0.1:443/apis/flowcontrol.apiserver.k8s.io/v1alpha1?timeout=32s {\"level\":\"info\",\"ts\":1612732107.768025,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\"127.0.0.1:8080\"} {\"level\":\"info\",\"ts\":1612732107.768796,\"logger\":\"watches\",\"msg\":\"Environment variable not set; using default value\",\"envVar\":\"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM\",\"default\":2} {\"level\":\"info\",\"ts\":1612732107.7688773,\"logger\":\"cmd\",\"msg\":\"Environment variable not set; using default value\",\"Namespace\":\"\",\"envVar\":\"ANSIBLE_DEBUG_LOGS\",\"ANSIBLE_DEBUG_LOGS\":false} {\"level\":\"info\",\"ts\":1612732107.7688901,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612732107.770032,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} I0207 21:08:27.770185 7 leaderelection.go:243] attempting to acquire leader lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612732107.770202,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} I0207 21:08:27.784854 7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612732107.7850506,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612732107.8853772,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612732107.8854098,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":4}",
"containers: - name: manager env: - name: ANSIBLE_DEBUG_LOGS value: \"True\"",
"apiVersion: \"cache.example.com/v1alpha1\" kind: \"Memcached\" metadata: name: \"example-memcached\" annotations: \"ansible.sdk.operatorframework.io/verbosity\": \"4\" spec: size: 4",
"status: conditions: - ansibleResult: changed: 3 completion: 2018-12-03T13:45:57.13329 failures: 1 ok: 6 skipped: 0 lastTransitionTime: 2018-12-03T13:45:57Z message: 'Status code was -1 and not [200]: Request failed: <urlopen error [Errno 113] No route to host>' reason: Failed status: \"True\" type: Failure - lastTransitionTime: 2018-12-03T13:46:13Z message: Running reconciliation reason: Running status: \"True\" type: Running",
"- version: v1 group: api.example.com kind: <kind> role: <role> manageStatus: false",
"- operator_sdk.util.k8s_status: api_version: app.example.com/v1 kind: <kind> name: \"{{ ansible_operator_meta.name }}\" namespace: \"{{ ansible_operator_meta.namespace }}\" status: test: data",
"collections: - operator_sdk.util",
"k8s_status: status: key1: value1",
"mkdir nginx-operator",
"cd nginx-operator",
"operator-sdk init --plugins=helm",
"operator-sdk create api --group demo --version v1 --kind Nginx",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc adm policy add-scc-to-user anyuid system:serviceaccount:nginx-operator-system:nginx-sample",
"oc apply -f config/samples/demo_v1_nginx.yaml -n nginx-operator-system",
"oc logs deployment.apps/nginx-operator-controller-manager -c manager -n nginx-operator-system",
"oc delete -f config/samples/demo_v1_nginx.yaml -n nginx-operator-system",
"make undeploy",
"mkdir -p USDHOME/projects/nginx-operator",
"cd USDHOME/projects/nginx-operator",
"operator-sdk init --plugins=helm --domain=example.com --group=demo --version=v1 --kind=Nginx",
"operator-sdk init --plugins helm --help",
"domain: example.com layout: - helm.sdk.operatorframework.io/v1 plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {} projectName: nginx-operator resources: - api: crdVersion: v1 namespaced: true domain: example.com group: demo kind: Nginx version: v1 version: \"3\"",
"Use the 'create api' subcommand to add watches to this file. - group: demo version: v1 kind: Nginx chart: helm-charts/nginx +kubebuilder:scaffold:watch",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 2",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 2 service: port: 8080",
"- group: demo.example.com version: v1alpha1 kind: Nginx chart: helm-charts/nginx overrideValues: proxy.http: USDHTTP_PROXY",
"proxy: http: \"\" https: \"\" no_proxy: \"\"",
"containers: - name: {{ .Chart.Name }} securityContext: - toYaml {{ .Values.securityContext | nindent 12 }} image: \"{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}\" imagePullPolicy: {{ .Values.image.pullPolicy }} env: - name: http_proxy value: \"{{ .Values.proxy.http }}\"",
"containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"",
"make install run",
"{\"level\":\"info\",\"ts\":1612652419.9289865,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\":8080\"} {\"level\":\"info\",\"ts\":1612652419.9296563,\"logger\":\"helm.controller\",\"msg\":\"Watching resource\",\"apiVersion\":\"demo.example.com/v1\",\"kind\":\"Nginx\",\"namespace\":\"\",\"reconcilePeriod\":\"1m0s\"} {\"level\":\"info\",\"ts\":1612652419.929983,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612652419.930015,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: demo.example.com/v1, Kind=Nginx\"} {\"level\":\"info\",\"ts\":1612652420.2307851,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612652420.2309358,\"logger\":\"controller-runtime.manager.controller.nginx-controller\",\"msg\":\"Starting workers\",\"worker count\":8}",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"oc project nginx-operator-system",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 3",
"oc adm policy add-scc-to-user anyuid system:serviceaccount:nginx-operator-system:nginx-sample",
"oc apply -f config/samples/demo_v1_nginx.yaml",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE nginx-operator-controller-manager 1/1 1 1 8m nginx-sample 3/3 3 3 1m",
"oc get pods",
"NAME READY STATUS RESTARTS AGE nginx-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m nginx-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m nginx-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m",
"oc get nginx/nginx-sample -o yaml",
"apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 3 status: nodes: - nginx-sample-6fd7c98d8-7dqdr - nginx-sample-6fd7c98d8-g5k7v - nginx-sample-6fd7c98d8-m7vn7",
"oc patch nginx nginx-sample -p '{\"spec\":{\"replicaCount\": 5}}' --type=merge",
"oc get deployments",
"NAME READY UP-TO-DATE AVAILABLE AGE nginx-operator-controller-manager 1/1 1 1 10m nginx-sample 5/5 5 5 3m",
"oc delete -f config/samples/demo_v1_nginx.yaml",
"make undeploy",
"operator-sdk cleanup <project_name>",
"FROM quay.io/operator-framework/helm-operator:v1.31.0 1",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.31.0-ocp",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: <operator_name>-admin subjects: - kind: ServiceAccount name: <operator_name> namespace: <operator_namespace> roleRef: kind: ClusterRole name: cluster-admin apiGroup: \"\" rules: 1 - apiGroups: - \"\" resources: - secrets verbs: - watch",
"apiVersion: apache.org/v1alpha1 kind: Tomcat metadata: name: example-app spec: replicaCount: 2",
"{{ .Values.replicaCount }}",
"oc get Tomcats --all-namespaces",
"mkdir -p USDHOME/github.com/example/memcached-operator",
"cd USDHOME/github.com/example/memcached-operator",
"operator-sdk init --plugins=hybrid.helm.sdk.operatorframework.io --project-version=\"3\" --domain my.domain --repo=github.com/example/memcached-operator",
"operator-sdk create api --plugins helm.sdk.operatorframework.io/v1 --group cache --version v1 --kind Memcached",
"operator-sdk create api --plugins helm.sdk.operatorframework.io/v1 --help",
"Use the 'create api' subcommand to add watches to this file. - group: cache.my.domain version: v1 kind: Memcached chart: helm-charts/memcached #+kubebuilder:scaffold:watch",
"// Operator's main.go // With the help of helpers provided in the library, the reconciler can be // configured here before starting the controller with this reconciler. reconciler := reconciler.New( reconciler.WithChart(*chart), reconciler.WithGroupVersionKind(gvk), ) if err := reconciler.SetupWithManager(mgr); err != nil { panic(fmt.Sprintf(\"unable to create reconciler: %s\", err)) }",
"operator-sdk create api --group=cache --version v1 --kind MemcachedBackup --resource --controller --plugins=go/v3",
"Create Resource [y/n] y Create Controller [y/n] y",
"// MemcachedBackupSpec defines the desired state of MemcachedBackup type MemcachedBackupSpec struct { // INSERT ADDITIONAL SPEC FIELDS - desired state of cluster // Important: Run \"make\" to regenerate code after modifying this file //+kubebuilder:validation:Minimum=0 // Size is the size of the memcached deployment Size int32 `json:\"size\"` } // MemcachedBackupStatus defines the observed state of MemcachedBackup type MemcachedBackupStatus struct { // INSERT ADDITIONAL STATUS FIELD - define observed state of cluster // Important: Run \"make\" to regenerate code after modifying this file // Nodes are the names of the memcached pods Nodes []string `json:\"nodes\"` }",
"make generate",
"make manifests",
"for _, w := range ws { // Register controller with the factory reconcilePeriod := defaultReconcilePeriod if w.ReconcilePeriod != nil { reconcilePeriod = w.ReconcilePeriod.Duration } maxConcurrentReconciles := defaultMaxConcurrentReconciles if w.MaxConcurrentReconciles != nil { maxConcurrentReconciles = *w.MaxConcurrentReconciles } r, err := reconciler.New( reconciler.WithChart(*w.Chart), reconciler.WithGroupVersionKind(w.GroupVersionKind), reconciler.WithOverrideValues(w.OverrideValues), reconciler.SkipDependentWatches(w.WatchDependentResources != nil && !*w.WatchDependentResources), reconciler.WithMaxConcurrentReconciles(maxConcurrentReconciles), reconciler.WithReconcilePeriod(reconcilePeriod), reconciler.WithInstallAnnotations(annotation.DefaultInstallAnnotations...), reconciler.WithUpgradeAnnotations(annotation.DefaultUpgradeAnnotations...), reconciler.WithUninstallAnnotations(annotation.DefaultUninstallAnnotations...), )",
"// Setup manager with Go API if err = (&controllers.MemcachedBackupReconciler{ Client: mgr.GetClient(), Scheme: mgr.GetScheme(), }).SetupWithManager(mgr); err != nil { setupLog.Error(err, \"unable to create controller\", \"controller\", \"MemcachedBackup\") os.Exit(1) } // Setup manager with Helm API for _, w := range ws { if err := r.SetupWithManager(mgr); err != nil { setupLog.Error(err, \"unable to create controller\", \"controller\", \"Helm\") os.Exit(1) } setupLog.Info(\"configured watch\", \"gvk\", w.GroupVersionKind, \"chartPath\", w.ChartPath, \"maxConcurrentReconciles\", maxConcurrentReconciles, \"reconcilePeriod\", reconcilePeriod) } // Start the manager if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil { setupLog.Error(err, \"problem running manager\") os.Exit(1) }",
"--- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: manager-role rules: - apiGroups: - \"\" resources: - namespaces verbs: - get - apiGroups: - apps resources: - deployments - daemonsets - replicasets - statefulsets verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcachedbackups verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcachedbackups/finalizers verbs: - create - delete - get - list - patch - update - watch - apiGroups: - \"\" resources: - pods - services - services/finalizers - endpoints - persistentvolumeclaims - events - configmaps - secrets - serviceaccounts verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcachedbackups/status verbs: - get - patch - update - apiGroups: - policy resources: - events - poddisruptionbudgets verbs: - create - delete - get - list - patch - update - watch - apiGroups: - cache.my.domain resources: - memcacheds - memcacheds/status - memcacheds/finalizers verbs: - create - delete - get - list - patch - update - watch",
"make install run",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc get deployment -n <project_name>-system",
"NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m",
"oc project <project_name>-system",
"apiVersion: cache.my.domain/v1 kind: Memcached metadata: name: memcached-sample spec: # Default values copied from <project_dir>/helm-charts/memcached/values.yaml affinity: {} autoscaling: enabled: false maxReplicas: 100 minReplicas: 1 targetCPUUtilizationPercentage: 80 fullnameOverride: \"\" image: pullPolicy: IfNotPresent repository: nginx tag: \"\" imagePullSecrets: [] ingress: annotations: {} className: \"\" enabled: false hosts: - host: chart-example.local paths: - path: / pathType: ImplementationSpecific tls: [] nameOverride: \"\" nodeSelector: {} podAnnotations: {} podSecurityContext: {} replicaCount: 3 resources: {} securityContext: {} service: port: 80 type: ClusterIP serviceAccount: annotations: {} create: true name: \"\" tolerations: []",
"oc apply -f config/samples/cache_v1_memcached.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 18m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 18m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 18m",
"apiVersion: cache.my.domain/v1 kind: MemcachedBackup metadata: name: memcachedbackup-sample spec: size: 2",
"oc apply -f config/samples/cache_v1_memcachedbackup.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE memcachedbackup-sample-8649699989-4bbzg 1/1 Running 0 22m memcachedbackup-sample-8649699989-mq6mx 1/1 Running 0 22m",
"oc delete -f config/samples/cache_v1_memcached.yaml",
"oc delete -f config/samples/cache_v1_memcachedbackup.yaml",
"make undeploy",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.31.0-ocp",
"mkdir memcached-operator",
"cd memcached-operator",
"operator-sdk init --plugins=quarkus --domain=example.com --project-name=memcached-operator",
"operator-sdk create api --plugins quarkus --group cache --version v1 --kind Memcached",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"oc logs deployment.apps/memcached-operator-controller-manager -c manager -n memcached-operator-system",
"oc delete -f config/samples/cache_v1_memcached.yaml -n memcached-operator-system",
"make undeploy",
"mkdir -p USDHOME/projects/memcached-operator",
"cd USDHOME/projects/memcached-operator",
"operator-sdk init --plugins=quarkus --domain=example.com --project-name=memcached-operator",
"domain: example.com layout: - quarkus.javaoperatorsdk.io/v1-alpha projectName: memcached-operator version: \"3\"",
"operator-sdk create api --plugins=quarkus \\ 1 --group=cache \\ 2 --version=v1 \\ 3 --kind=Memcached 4",
"tree",
". ├── Makefile ├── PROJECT ├── pom.xml └── src └── main ├── java │ └── com │ └── example │ ├── Memcached.java │ ├── MemcachedReconciler.java │ ├── MemcachedSpec.java │ └── MemcachedStatus.java └── resources └── application.properties 6 directories, 8 files",
"public class MemcachedSpec { private Integer size; public Integer getSize() { return size; } public void setSize(Integer size) { this.size = size; } }",
"import java.util.ArrayList; import java.util.List; public class MemcachedStatus { // Add Status information here // Nodes are the names of the memcached pods private List<String> nodes; public List<String> getNodes() { if (nodes == null) { nodes = new ArrayList<>(); } return nodes; } public void setNodes(List<String> nodes) { this.nodes = nodes; } }",
"@Version(\"v1\") @Group(\"cache.example.com\") public class Memcached extends CustomResource<MemcachedSpec, MemcachedStatus> implements Namespaced {}",
"mvn clean install",
"cat target/kubernetes/memcacheds.cache.example.com-v1.yaml",
"Generated by Fabric8 CRDGenerator, manual edits might get overwritten! apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: memcacheds.cache.example.com spec: group: cache.example.com names: kind: Memcached plural: memcacheds singular: memcached scope: Namespaced versions: - name: v1 schema: openAPIV3Schema: properties: spec: properties: size: type: integer type: object status: properties: nodes: items: type: string type: array type: object type: object served: true storage: true subresources: status: {}",
"apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: # Add spec fields here size: 1",
"<dependency> <groupId>commons-collections</groupId> <artifactId>commons-collections</artifactId> <version>3.2.2</version> </dependency>",
"package com.example; import io.fabric8.kubernetes.client.KubernetesClient; import io.javaoperatorsdk.operator.api.reconciler.Context; import io.javaoperatorsdk.operator.api.reconciler.Reconciler; import io.javaoperatorsdk.operator.api.reconciler.UpdateControl; import io.fabric8.kubernetes.api.model.ContainerBuilder; import io.fabric8.kubernetes.api.model.ContainerPortBuilder; import io.fabric8.kubernetes.api.model.LabelSelectorBuilder; import io.fabric8.kubernetes.api.model.ObjectMetaBuilder; import io.fabric8.kubernetes.api.model.OwnerReferenceBuilder; import io.fabric8.kubernetes.api.model.Pod; import io.fabric8.kubernetes.api.model.PodSpecBuilder; import io.fabric8.kubernetes.api.model.PodTemplateSpecBuilder; import io.fabric8.kubernetes.api.model.apps.Deployment; import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder; import io.fabric8.kubernetes.api.model.apps.DeploymentSpecBuilder; import org.apache.commons.collections.CollectionUtils; import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.stream.Collectors; public class MemcachedReconciler implements Reconciler<Memcached> { private final KubernetesClient client; public MemcachedReconciler(KubernetesClient client) { this.client = client; } // TODO Fill in the rest of the reconciler @Override public UpdateControl<Memcached> reconcile( Memcached resource, Context context) { // TODO: fill in logic Deployment deployment = client.apps() .deployments() .inNamespace(resource.getMetadata().getNamespace()) .withName(resource.getMetadata().getName()) .get(); if (deployment == null) { Deployment newDeployment = createMemcachedDeployment(resource); client.apps().deployments().create(newDeployment); return UpdateControl.noUpdate(); } int currentReplicas = deployment.getSpec().getReplicas(); int requiredReplicas = resource.getSpec().getSize(); if (currentReplicas != requiredReplicas) { deployment.getSpec().setReplicas(requiredReplicas); client.apps().deployments().createOrReplace(deployment); return UpdateControl.noUpdate(); } List<Pod> pods = client.pods() .inNamespace(resource.getMetadata().getNamespace()) .withLabels(labelsForMemcached(resource)) .list() .getItems(); List<String> podNames = pods.stream().map(p -> p.getMetadata().getName()).collect(Collectors.toList()); if (resource.getStatus() == null || !CollectionUtils.isEqualCollection(podNames, resource.getStatus().getNodes())) { if (resource.getStatus() == null) resource.setStatus(new MemcachedStatus()); resource.getStatus().setNodes(podNames); return UpdateControl.updateResource(resource); } return UpdateControl.noUpdate(); } private Map<String, String> labelsForMemcached(Memcached m) { Map<String, String> labels = new HashMap<>(); labels.put(\"app\", \"memcached\"); labels.put(\"memcached_cr\", m.getMetadata().getName()); return labels; } private Deployment createMemcachedDeployment(Memcached m) { Deployment deployment = new DeploymentBuilder() .withMetadata( new ObjectMetaBuilder() .withName(m.getMetadata().getName()) .withNamespace(m.getMetadata().getNamespace()) .build()) .withSpec( new DeploymentSpecBuilder() .withReplicas(m.getSpec().getSize()) .withSelector( new LabelSelectorBuilder().withMatchLabels(labelsForMemcached(m)).build()) .withTemplate( new PodTemplateSpecBuilder() .withMetadata( new ObjectMetaBuilder().withLabels(labelsForMemcached(m)).build()) .withSpec( new PodSpecBuilder() .withContainers( new ContainerBuilder() .withImage(\"memcached:1.4.36-alpine\") .withName(\"memcached\") .withCommand(\"memcached\", \"-m=64\", \"-o\", 
\"modern\", \"-v\") .withPorts( new ContainerPortBuilder() .withContainerPort(11211) .withName(\"memcached\") .build()) .build()) .build()) .build()) .build()) .build(); deployment.addOwnerReference(m); return deployment; } }",
"Deployment deployment = client.apps() .deployments() .inNamespace(resource.getMetadata().getNamespace()) .withName(resource.getMetadata().getName()) .get();",
"if (deployment == null) { Deployment newDeployment = createMemcachedDeployment(resource); client.apps().deployments().create(newDeployment); return UpdateControl.noUpdate(); }",
"int currentReplicas = deployment.getSpec().getReplicas(); int requiredReplicas = resource.getSpec().getSize();",
"if (currentReplicas != requiredReplicas) { deployment.getSpec().setReplicas(requiredReplicas); client.apps().deployments().createOrReplace(deployment); return UpdateControl.noUpdate(); }",
"List<Pod> pods = client.pods() .inNamespace(resource.getMetadata().getNamespace()) .withLabels(labelsForMemcached(resource)) .list() .getItems(); List<String> podNames = pods.stream().map(p -> p.getMetadata().getName()).collect(Collectors.toList());",
"if (resource.getStatus() == null || !CollectionUtils.isEqualCollection(podNames, resource.getStatus().getNodes())) { if (resource.getStatus() == null) resource.setStatus(new MemcachedStatus()); resource.getStatus().setNodes(podNames); return UpdateControl.updateResource(resource); }",
"private Map<String, String> labelsForMemcached(Memcached m) { Map<String, String> labels = new HashMap<>(); labels.put(\"app\", \"memcached\"); labels.put(\"memcached_cr\", m.getMetadata().getName()); return labels; }",
"private Deployment createMemcachedDeployment(Memcached m) { Deployment deployment = new DeploymentBuilder() .withMetadata( new ObjectMetaBuilder() .withName(m.getMetadata().getName()) .withNamespace(m.getMetadata().getNamespace()) .build()) .withSpec( new DeploymentSpecBuilder() .withReplicas(m.getSpec().getSize()) .withSelector( new LabelSelectorBuilder().withMatchLabels(labelsForMemcached(m)).build()) .withTemplate( new PodTemplateSpecBuilder() .withMetadata( new ObjectMetaBuilder().withLabels(labelsForMemcached(m)).build()) .withSpec( new PodSpecBuilder() .withContainers( new ContainerBuilder() .withImage(\"memcached:1.4.36-alpine\") .withName(\"memcached\") .withCommand(\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\") .withPorts( new ContainerPortBuilder() .withContainerPort(11211) .withName(\"memcached\") .build()) .build()) .build()) .build()) .build()) .build(); deployment.addOwnerReference(m); return deployment; }",
"mvn clean install",
"[INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 11.193 s [INFO] Finished at: 2021-05-26T12:16:54-04:00 [INFO] ------------------------------------------------------------------------",
"oc apply -f target/kubernetes/memcacheds.cache.example.com-v1.yml",
"customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: memcached-operator-admin subjects: - kind: ServiceAccount name: memcached-quarkus-operator-operator namespace: <operator_namespace> roleRef: kind: ClusterRole name: cluster-admin apiGroup: \"\"",
"oc apply -f rbac.yaml",
"java -jar target/quarkus-app/quarkus-run.jar",
"kubectl apply -f memcached-sample.yaml",
"memcached.cache.example.com/memcached-sample created",
"oc get all",
"NAME READY STATUS RESTARTS AGE pod/memcached-sample-6c765df685-mfqnz 1/1 Running 0 18s",
"make docker-build IMG=<registry>/<user>/<image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f target/kubernetes/memcacheds.cache.example.com-v1.yml",
"customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: memcached-operator-admin subjects: - kind: ServiceAccount name: memcached-quarkus-operator-operator namespace: <operator_namespace> roleRef: kind: ClusterRole name: cluster-admin apiGroup: \"\"",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"oc apply -f rbac.yaml",
"oc get all -n default",
"NAME READY UP-TO-DATE AVAILABLE AGE pod/memcached-quarkus-operator-operator-7db86ccf58-k4mlm 0/1 Running 0 18s",
"oc apply -f memcached-sample.yaml",
"memcached.cache.example.com/memcached-sample created",
"oc get all",
"NAME READY STATUS RESTARTS AGE pod/memcached-quarkus-operator-operator-7b766f4896-kxnzt 1/1 Running 1 79s pod/memcached-sample-6c765df685-mfqnz 1/1 Running 0 18s",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.31.0-ocp",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: features.operators.openshift.io/disconnected: \"true\" features.operators.openshift.io/fips-compliant: \"false\" features.operators.openshift.io/proxy-aware: \"false\" features.operators.openshift.io/tls-profiles: \"false\" features.operators.openshift.io/token-auth-aws: \"false\" features.operators.openshift.io/token-auth-azure: \"false\" features.operators.openshift.io/token-auth-gcp: \"false\"",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: operators.openshift.io/infrastructure-features: '[\"disconnected\", \"proxy-aware\"]'",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: operators.openshift.io/valid-subscription: '[\"OpenShift Container Platform\"]'",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: operators.openshift.io/valid-subscription: '[\"3Scale Commercial License\", \"Red Hat Managed Integration\"]'",
"spec: spec: containers: - command: - /manager env: - name: <related_image_environment_variable> 1 value: \"<related_image_reference_with_tag>\" 2",
"// deploymentForMemcached returns a memcached Deployment object Spec: corev1.PodSpec{ Containers: []corev1.Container{{ - Image: \"memcached:1.4.36-alpine\", 1 + Image: os.Getenv(\"<related_image_environment_variable>\"), 2 Name: \"memcached\", Command: []string{\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\"}, Ports: []corev1.ContainerPort{{",
"spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v - image: \"docker.io/memcached:1.4.36-alpine\" 1 + image: \"{{ lookup('env', '<related_image_environment_variable>') }}\" 2 ports: - containerPort: 11211",
"- group: demo.example.com version: v1alpha1 kind: Memcached chart: helm-charts/memcached overrideValues: 1 relatedImage: USD{<related_image_environment_variable>} 2",
"relatedImage: \"\"",
"containers: - name: {{ .Chart.Name }} securityContext: - toYaml {{ .Values.securityContext | nindent 12 }} image: \"{{ .Values.image.pullPolicy }} env: 1 - name: related_image 2 value: \"{{ .Values.relatedImage }}\" 3",
"BUNDLE_GEN_FLAGS ?= -q --overwrite --version USD(VERSION) USD(BUNDLE_METADATA_OPTS) # USE_IMAGE_DIGESTS defines if images are resolved via tags or digests # You can enable this value if you would like to use SHA Based Digests # To enable set flag to true USE_IMAGE_DIGESTS ?= false ifeq (USD(USE_IMAGE_DIGESTS), true) BUNDLE_GEN_FLAGS += --use-image-digests endif - USD(KUSTOMIZE) build config/manifests | operator-sdk generate bundle -q --overwrite --version USD(VERSION) USD(BUNDLE_METADATA_OPTS) 1 + USD(KUSTOMIZE) build config/manifests | operator-sdk generate bundle USD(BUNDLE_GEN_FLAGS) 2",
"make bundle USE_IMAGE_DIGESTS=true",
"metadata: annotations: operators.openshift.io/infrastructure-features: '[\"disconnected\"]'",
"labels: operatorframework.io/arch.<arch>: supported 1 operatorframework.io/os.<os>: supported 2",
"labels: operatorframework.io/os.linux: supported",
"labels: operatorframework.io/arch.amd64: supported",
"labels: operatorframework.io/arch.s390x: supported operatorframework.io/os.zos: supported operatorframework.io/os.linux: supported 1 operatorframework.io/arch.amd64: supported 2",
"metadata: annotations: operatorframework.io/suggested-namespace: <namespace> 1",
"metadata: annotations: operatorframework.io/suggested-namespace-template: 1 { \"apiVersion\": \"v1\", \"kind\": \"Namespace\", \"metadata\": { \"name\": \"vertical-pod-autoscaler-suggested-template\", \"annotations\": { \"openshift.io/node-selector\": \"\" } } }",
"module github.com/example-inc/memcached-operator go 1.19 require ( k8s.io/apimachinery v0.26.0 k8s.io/client-go v0.26.0 sigs.k8s.io/controller-runtime v0.14.1 operator-framework/operator-lib v0.11.0 )",
"import ( apiv1 \"github.com/operator-framework/api/pkg/operators/v1\" ) func NewUpgradeable(cl client.Client) (Condition, error) { return NewCondition(cl, \"apiv1.OperatorUpgradeable\") } cond, err := NewUpgradeable(cl);",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: webhook-operator.v0.0.1 spec: customresourcedefinitions: owned: - kind: WebhookTest name: webhooktests.webhook.operators.coreos.io 1 version: v1 install: spec: deployments: - name: webhook-operator-webhook strategy: deployment installModes: - supported: false type: OwnNamespace - supported: false type: SingleNamespace - supported: false type: MultiNamespace - supported: true type: AllNamespaces webhookdefinitions: - type: ValidatingAdmissionWebhook 2 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook failurePolicy: Fail generateName: vwebhooktest.kb.io rules: - apiGroups: - webhook.operators.coreos.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - webhooktests sideEffects: None webhookPath: /validate-webhook-operators-coreos-io-v1-webhooktest - type: MutatingAdmissionWebhook 3 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook failurePolicy: Fail generateName: mwebhooktest.kb.io rules: - apiGroups: - webhook.operators.coreos.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - webhooktests sideEffects: None webhookPath: /mutate-webhook-operators-coreos-io-v1-webhooktest - type: ConversionWebhook 4 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook generateName: cwebhooktest.kb.io sideEffects: None webhookPath: /convert conversionCRDs: - webhooktests.webhook.operators.coreos.io 5",
"- displayName: MongoDB Standalone group: mongodb.com kind: MongoDbStandalone name: mongodbstandalones.mongodb.com resources: - kind: Service name: '' version: v1 - kind: StatefulSet name: '' version: v1beta2 - kind: Pod name: '' version: v1 - kind: ConfigMap name: '' version: v1 specDescriptors: - description: Credentials for Ops Manager or Cloud Manager. displayName: Credentials path: credentials x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:Secret' - description: Project this deployment belongs to. displayName: Project path: project x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:ConfigMap' - description: MongoDB version to be installed. displayName: Version path: version x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:label' statusDescriptors: - description: The status of each of the pods for the MongoDB cluster. displayName: Pod Status path: pods x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:podStatuses' version: v1 description: >- MongoDB Deployment consisting of only one host. No replication of data.",
"required: - name: etcdclusters.etcd.database.coreos.com version: v1beta2 kind: EtcdCluster displayName: etcd Cluster description: Represents a cluster of etcd nodes.",
"versions: - name: v1alpha1 served: true storage: false - name: v1beta1 1 served: true storage: true",
"customresourcedefinitions: owned: - name: cluster.example.com version: v1beta1 1 kind: cluster displayName: Cluster",
"versions: - name: v1alpha1 served: false 1 storage: true",
"versions: - name: v1alpha1 served: false storage: false 1 - name: v1beta1 served: true storage: true 2",
"versions: - name: v1beta1 served: true storage: true",
"metadata: annotations: alm-examples: >- [{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdCluster\",\"metadata\":{\"name\":\"example\",\"namespace\":\"<operator_namespace>\"},\"spec\":{\"size\":3,\"version\":\"3.2.13\"}},{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdRestore\",\"metadata\":{\"name\":\"example-etcd-cluster\"},\"spec\":{\"etcdCluster\":{\"name\":\"example-etcd-cluster\"},\"backupStorageType\":\"S3\",\"s3\":{\"path\":\"<full-s3-path>\",\"awsSecret\":\"<aws-secret>\"}}},{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdBackup\",\"metadata\":{\"name\":\"example-etcd-cluster-backup\"},\"spec\":{\"etcdEndpoints\":[\"<etcd-cluster-endpoints>\"],\"storageType\":\"S3\",\"s3\":{\"path\":\"<full-s3-path>\",\"awsSecret\":\"<aws-secret>\"}}}]",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: my-operator-v1.2.3 annotations: operators.operatorframework.io/internal-objects: '[\"my.internal.crd1.io\",\"my.internal.crd2.io\"]' 1",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: my-operator-v1.2.3 annotations: operatorframework.io/initialization-resource: |- { \"apiVersion\": \"ocs.openshift.io/v1\", \"kind\": \"StorageCluster\", \"metadata\": { \"name\": \"example-storagecluster\" }, \"spec\": { \"manageNodes\": false, \"monPVCTemplate\": { \"spec\": { \"accessModes\": [ \"ReadWriteOnce\" ], \"resources\": { \"requests\": { \"storage\": \"10Gi\" } }, \"storageClassName\": \"gp2\" } }, \"storageDeviceSets\": [ { \"count\": 3, \"dataPVCTemplate\": { \"spec\": { \"accessModes\": [ \"ReadWriteOnce\" ], \"resources\": { \"requests\": { \"storage\": \"1Ti\" } }, \"storageClassName\": \"gp2\", \"volumeMode\": \"Block\" } }, \"name\": \"example-deviceset\", \"placement\": {}, \"portable\": true, \"resources\": {} } ] } }",
"make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>",
"make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>",
"docker push <registry>/<user>/<bundle_image_name>:<tag>",
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3",
"make catalog-build CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>",
"make catalog-push CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>",
"make bundle-build bundle-push catalog-build catalog-push BUNDLE_IMG=<bundle_image_pull_spec> CATALOG_IMG=<index_image_pull_spec>",
"IMAGE_TAG_BASE=quay.io/example/my-operator",
"make bundle-build bundle-push catalog-build catalog-push",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: cs-memcached namespace: <operator_namespace> spec: displayName: My Test publisher: Company sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 1 image: quay.io/example/memcached-catalog:v0.0.1 2 updateStrategy: registryPoll: interval: 10m",
"oc get catalogsource",
"NAME DISPLAY TYPE PUBLISHER AGE cs-memcached My Test grpc Company 4h31m",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-test namespace: <operator_namespace> spec: targetNamespaces: - <operator_namespace>",
"\\ufeffapiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: catalogtest namespace: <catalog_namespace> spec: channel: \"alpha\" installPlanApproval: Manual name: catalog source: cs-memcached sourceNamespace: <operator_namespace> startingCSV: memcached-operator.v0.0.1",
"oc get og",
"NAME AGE my-test 4h40m",
"oc get csv",
"NAME DISPLAY VERSION REPLACES PHASE memcached-operator.v0.0.1 Test 0.0.1 Succeeded",
"oc get pods",
"NAME READY STATUS RESTARTS AGE 9098d908802769fbde8bd45255e69710a9f8420a8f3d814abe88b68f8ervdj6 0/1 Completed 0 4h33m catalog-controller-manager-7fd5b7b987-69s4n 2/2 Running 0 4h32m cs-memcached-7622r 1/1 Running 0 4h33m",
"operator-sdk run bundle <registry>/<user>/memcached-operator:v0.0.1",
"INFO[0006] Creating a File-Based Catalog of the bundle \"quay.io/demo/memcached-operator:v0.0.1\" INFO[0008] Generated a valid File-Based Catalog INFO[0012] Created registry pod: quay-io-demo-memcached-operator-v1-0-1 INFO[0012] Created CatalogSource: memcached-operator-catalog INFO[0012] OperatorGroup \"operator-sdk-og\" created INFO[0012] Created Subscription: memcached-operator-v0-0-1-sub INFO[0015] Approved InstallPlan install-h9666 for the Subscription: memcached-operator-v0-0-1-sub INFO[0015] Waiting for ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" to reach 'Succeeded' phase INFO[0015] Waiting for ClusterServiceVersion \"\"my-project/memcached-operator.v0.0.1\" to appear INFO[0026] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Pending INFO[0028] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Installing INFO[0059] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Succeeded INFO[0059] OLM has successfully installed \"memcached-operator.v0.0.1\"",
"operator-sdk run bundle-upgrade <registry>/<user>/memcached-operator:v0.0.2",
"INFO[0002] Found existing subscription with name memcached-operator-v0-0-1-sub and namespace my-project INFO[0002] Found existing catalog source with name memcached-operator-catalog and namespace my-project INFO[0008] Generated a valid Upgraded File-Based Catalog INFO[0009] Created registry pod: quay-io-demo-memcached-operator-v0-0-2 INFO[0009] Updated catalog source memcached-operator-catalog with address and annotations INFO[0010] Deleted previous registry pod with name \"quay-io-demo-memcached-operator-v0-0-1\" INFO[0041] Approved InstallPlan install-gvcjh for the Subscription: memcached-operator-v0-0-1-sub INFO[0042] Waiting for ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" to reach 'Succeeded' phase INFO[0019] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Pending INFO[0042] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: InstallReady INFO[0043] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Installing INFO[0044] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Succeeded INFO[0044] Successfully upgraded to \"memcached-operator.v0.0.2\"",
"operator-sdk cleanup memcached-operator",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: \"olm.properties\": '[{\"type\": \"olm.maxOpenShiftVersion\", \"value\": \"<cluster_version>\"}]' 1",
"com.redhat.openshift.versions: \"v4.7-v4.9\" 1",
"LABEL com.redhat.openshift.versions=\"<versions>\" 1",
"spec: securityContext: seccompProfile: type: RuntimeDefault 1 runAsNonRoot: true containers: - name: <operator_workload_container> securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL",
"spec: securityContext: 1 runAsNonRoot: true containers: - name: <operator_workload_container> securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL",
"containers: - name: my-container securityContext: allowPrivilegeEscalation: false capabilities: add: - \"NET_ADMIN\"",
"install: spec: clusterPermissions: - rules: - apiGroups: - security.openshift.io resourceNames: - privileged resources: - securitycontextconstraints verbs: - use serviceAccountName: default",
"spec: apiservicedefinitions:{} description: The <operator_name> requires a privileged pod security admission label set on the Operator's namespace. The Operator's agents require escalated permissions to restart the node if the node needs remediation.",
"install: spec: clusterPermissions: - rules: - apiGroups: - \"cloudcredential.openshift.io\" resources: - credentialsrequests verbs: - create - delete - get - list - patch - update - watch",
"metadata: annotations: features.operators.openshift.io/token-auth-aws: \"true\"",
"// Get ENV var roleARN := os.Getenv(\"ROLEARN\") setupLog.Info(\"getting role ARN\", \"role ARN = \", roleARN) webIdentityTokenPath := \"/var/run/secrets/openshift/serviceaccount/token\"",
"import ( minterv1 \"github.com/openshift/cloud-credential-operator/pkg/apis/cloudcredential/v1\" corev1 \"k8s.io/api/core/v1\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" ) var in = minterv1.AWSProviderSpec{ StatementEntries: []minterv1.StatementEntry{ { Action: []string{ \"s3:*\", }, Effect: \"Allow\", Resource: \"arn:aws:s3:*:*:*\", }, }, STSIAMRoleARN: \"<role_arn>\", } var codec = minterv1.Codec var ProviderSpec, _ = codec.EncodeProviderSpec(in.DeepCopyObject()) const ( name = \"<credential_request_name>\" namespace = \"<namespace_name>\" ) var CredentialsRequestTemplate = &minterv1.CredentialsRequest{ ObjectMeta: metav1.ObjectMeta{ Name: name, Namespace: \"openshift-cloud-credential-operator\", }, Spec: minterv1.CredentialsRequestSpec{ ProviderSpec: ProviderSpec, SecretRef: corev1.ObjectReference{ Name: \"<secret_name>\", Namespace: namespace, }, ServiceAccountNames: []string{ \"<service_account_name>\", }, CloudTokenPath: \"\", }, }",
"// CredentialsRequest is a struct that represents a request for credentials type CredentialsRequest struct { APIVersion string `yaml:\"apiVersion\"` Kind string `yaml:\"kind\"` Metadata struct { Name string `yaml:\"name\"` Namespace string `yaml:\"namespace\"` } `yaml:\"metadata\"` Spec struct { SecretRef struct { Name string `yaml:\"name\"` Namespace string `yaml:\"namespace\"` } `yaml:\"secretRef\"` ProviderSpec struct { APIVersion string `yaml:\"apiVersion\"` Kind string `yaml:\"kind\"` StatementEntries []struct { Effect string `yaml:\"effect\"` Action []string `yaml:\"action\"` Resource string `yaml:\"resource\"` } `yaml:\"statementEntries\"` STSIAMRoleARN string `yaml:\"stsIAMRoleARN\"` } `yaml:\"providerSpec\"` // added new field CloudTokenPath string `yaml:\"cloudTokenPath\"` } `yaml:\"spec\"` } // ConsumeCredsRequestAddingTokenInfo is a function that takes a YAML filename and two strings as arguments // It unmarshals the YAML file to a CredentialsRequest object and adds the token information. func ConsumeCredsRequestAddingTokenInfo(fileName, tokenString, tokenPath string) (*CredentialsRequest, error) { // open a file containing YAML form of a CredentialsRequest file, err := os.Open(fileName) if err != nil { return nil, err } defer file.Close() // create a new CredentialsRequest object cr := &CredentialsRequest{} // decode the yaml file to the object decoder := yaml.NewDecoder(file) err = decoder.Decode(cr) if err != nil { return nil, err } // assign the string to the existing field in the object cr.Spec.CloudTokenPath = tokenPath // return the modified object return cr, nil }",
"// apply credentialsRequest on install credReq := credreq.CredentialsRequestTemplate credReq.Spec.CloudTokenPath = webIdentityTokenPath c := mgr.GetClient() if err := c.Create(context.TODO(), credReq); err != nil { if !errors.IsAlreadyExists(err) { setupLog.Error(err, \"unable to create CredRequest\") os.Exit(1) } }",
"// WaitForSecret is a function that takes a Kubernetes client, a namespace, and a v1 \"k8s.io/api/core/v1\" name as arguments // It waits until the secret object with the given name exists in the given namespace // It returns the secret object or an error if the timeout is exceeded func WaitForSecret(client kubernetes.Interface, namespace, name string) (*v1.Secret, error) { // set a timeout of 10 minutes timeout := time.After(10 * time.Minute) 1 // set a polling interval of 10 seconds ticker := time.NewTicker(10 * time.Second) // loop until the timeout or the secret is found for { select { case <-timeout: // timeout is exceeded, return an error return nil, fmt.Errorf(\"timed out waiting for secret %s in namespace %s\", name, namespace) // add to this error with a pointer to instructions for following a manual path to a Secret that will work on STS case <-ticker.C: // polling interval is reached, try to get the secret secret, err := client.CoreV1().Secrets(namespace).Get(context.Background(), name, metav1.GetOptions{}) if err != nil { if errors.IsNotFound(err) { // secret does not exist yet, continue waiting continue } else { // some other error occurred, return it return nil, err } } else { // secret is found, return it return secret, nil } } } }",
"func SharedCredentialsFileFromSecret(secret *corev1.Secret) (string, error) { var data []byte switch { case len(secret.Data[\"credentials\"]) > 0: data = secret.Data[\"credentials\"] default: return \"\", errors.New(\"invalid secret for aws credentials\") } f, err := ioutil.TempFile(\"\", \"aws-shared-credentials\") if err != nil { return \"\", errors.Wrap(err, \"failed to create file for shared credentials\") } defer f.Close() if _, err := f.Write(data); err != nil { return \"\", errors.Wrapf(err, \"failed to write credentials to %s\", f.Name()) } return f.Name(), nil }",
"sharedCredentialsFile, err := SharedCredentialsFileFromSecret(secret) if err != nil { // handle error } options := session.Options{ SharedConfigState: session.SharedConfigEnable, SharedConfigFiles: []string{sharedCredentialsFile}, }",
"#!/bin/bash set -x AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query \"Account\" --output text) OIDC_PROVIDER=USD(oc get authentication cluster -ojson | jq -r .spec.serviceAccountIssuer | sed -e \"s/^https:\\/\\///\") NAMESPACE=my-namespace SERVICE_ACCOUNT_NAME=\"my-service-account\" POLICY_ARN_STRINGS=\"arn:aws:iam::aws:policy/AmazonS3FullAccess\" read -r -d '' TRUST_RELATIONSHIP <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_PROVIDER}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_PROVIDER}:sub\": \"system:serviceaccount:USD{NAMESPACE}:USD{SERVICE_ACCOUNT_NAME}\" } } } ] } EOF echo \"USD{TRUST_RELATIONSHIP}\" > trust.json aws iam create-role --role-name \"USDSERVICE_ACCOUNT_NAME\" --assume-role-policy-document file://trust.json --description \"role for demo\" while IFS= read -r POLICY_ARN; do echo -n \"Attaching USDPOLICY_ARN ... \" aws iam attach-role-policy --role-name \"USDSERVICE_ACCOUNT_NAME\" --policy-arn \"USD{POLICY_ARN}\" echo \"ok.\" done <<< \"USDPOLICY_ARN_STRINGS\"",
"oc exec operator-pod -n <namespace_name> -- cat /var/run/secrets/openshift/serviceaccount/token",
"oc exec operator-pod -n <namespace_name> -- cat /<path>/<to>/<secret_name> 1",
"aws sts assume-role-with-web-identity --role-arn USDROLEARN --role-session-name <session_name> --web-identity-token USDTOKEN",
"operator-sdk scorecard <bundle_dir_or_image> [flags]",
"operator-sdk scorecard -h",
"./bundle └── tests └── scorecard └── config.yaml",
"kind: Configuration apiversion: scorecard.operatorframework.io/v1alpha3 metadata: name: config stages: - parallel: true tests: - image: quay.io/operator-framework/scorecard-test:v1.31.0 entrypoint: - scorecard-test - basic-check-spec labels: suite: basic test: basic-check-spec-test - image: quay.io/operator-framework/scorecard-test:v1.31.0 entrypoint: - scorecard-test - olm-bundle-validation labels: suite: olm test: olm-bundle-validation-test",
"make bundle",
"operator-sdk scorecard <bundle_dir_or_image>",
"{ \"apiVersion\": \"scorecard.operatorframework.io/v1alpha3\", \"kind\": \"TestList\", \"items\": [ { \"kind\": \"Test\", \"apiVersion\": \"scorecard.operatorframework.io/v1alpha3\", \"spec\": { \"image\": \"quay.io/operator-framework/scorecard-test:v1.31.0\", \"entrypoint\": [ \"scorecard-test\", \"olm-bundle-validation\" ], \"labels\": { \"suite\": \"olm\", \"test\": \"olm-bundle-validation-test\" } }, \"status\": { \"results\": [ { \"name\": \"olm-bundle-validation\", \"log\": \"time=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Found manifests directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Found metadata directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Getting mediaType info from manifests directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=info msg=\\\"Found annotations file\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=info msg=\\\"Could not find optional dependencies file\\\" name=bundle-test\\n\", \"state\": \"pass\" } ] } } ] }",
"-------------------------------------------------------------------------------- Image: quay.io/operator-framework/scorecard-test:v1.31.0 Entrypoint: [scorecard-test olm-bundle-validation] Labels: \"suite\":\"olm\" \"test\":\"olm-bundle-validation-test\" Results: Name: olm-bundle-validation State: pass Log: time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Found manifests directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Found metadata directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Getting mediaType info from manifests directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=info msg=\"Found annotations file\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=info msg=\"Could not find optional dependencies file\" name=bundle-test",
"operator-sdk scorecard <bundle_dir_or_image> -o text --selector=test=basic-check-spec-test",
"operator-sdk scorecard <bundle_dir_or_image> -o text --selector=suite=olm",
"operator-sdk scorecard <bundle_dir_or_image> -o text --selector='test in (basic-check-spec-test,olm-bundle-validation-test)'",
"apiVersion: scorecard.operatorframework.io/v1alpha3 kind: Configuration metadata: name: config stages: - parallel: true 1 tests: - entrypoint: - scorecard-test - basic-check-spec image: quay.io/operator-framework/scorecard-test:v1.31.0 labels: suite: basic test: basic-check-spec-test - entrypoint: - scorecard-test - olm-bundle-validation image: quay.io/operator-framework/scorecard-test:v1.31.0 labels: suite: olm test: olm-bundle-validation-test",
"// Copyright 2020 The Operator-SDK Authors // // Licensed under the Apache License, Version 2.0 (the \"License\"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an \"AS IS\" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package main import ( \"encoding/json\" \"fmt\" \"log\" \"os\" scapiv1alpha3 \"github.com/operator-framework/api/pkg/apis/scorecard/v1alpha3\" apimanifests \"github.com/operator-framework/api/pkg/manifests\" ) // This is the custom scorecard test example binary // As with the Redhat scorecard test image, the bundle that is under // test is expected to be mounted so that tests can inspect the // bundle contents as part of their test implementations. // The actual test is to be run is named and that name is passed // as an argument to this binary. This argument mechanism allows // this binary to run various tests all from within a single // test image. const PodBundleRoot = \"/bundle\" func main() { entrypoint := os.Args[1:] if len(entrypoint) == 0 { log.Fatal(\"Test name argument is required\") } // Read the pod's untar'd bundle from a well-known path. cfg, err := apimanifests.GetBundleFromDir(PodBundleRoot) if err != nil { log.Fatal(err.Error()) } var result scapiv1alpha3.TestStatus // Names of the custom tests which would be passed in the // `operator-sdk` command. switch entrypoint[0] { case CustomTest1Name: result = CustomTest1(cfg) case CustomTest2Name: result = CustomTest2(cfg) default: result = printValidTests() } // Convert scapiv1alpha3.TestResult to json. prettyJSON, err := json.MarshalIndent(result, \"\", \" \") if err != nil { log.Fatal(\"Failed to generate json\", err) } fmt.Printf(\"%s\\n\", string(prettyJSON)) } // printValidTests will print out full list of test names to give a hint to the end user on what the valid tests are. func printValidTests() scapiv1alpha3.TestStatus { result := scapiv1alpha3.TestResult{} result.State = scapiv1alpha3.FailState result.Errors = make([]string, 0) result.Suggestions = make([]string, 0) str := fmt.Sprintf(\"Valid tests for this image include: %s %s\", CustomTest1Name, CustomTest2Name) result.Errors = append(result.Errors, str) return scapiv1alpha3.TestStatus{ Results: []scapiv1alpha3.TestResult{result}, } } const ( CustomTest1Name = \"customtest1\" CustomTest2Name = \"customtest2\" ) // Define any operator specific custom tests here. // CustomTest1 and CustomTest2 are example test functions. Relevant operator specific // test logic is to be implemented in similarly. 
func CustomTest1(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus { r := scapiv1alpha3.TestResult{} r.Name = CustomTest1Name r.State = scapiv1alpha3.PassState r.Errors = make([]string, 0) r.Suggestions = make([]string, 0) almExamples := bundle.CSV.GetAnnotations()[\"alm-examples\"] if almExamples == \"\" { fmt.Println(\"no alm-examples in the bundle CSV\") } return wrapResult(r) } func CustomTest2(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus { r := scapiv1alpha3.TestResult{} r.Name = CustomTest2Name r.State = scapiv1alpha3.PassState r.Errors = make([]string, 0) r.Suggestions = make([]string, 0) almExamples := bundle.CSV.GetAnnotations()[\"alm-examples\"] if almExamples == \"\" { fmt.Println(\"no alm-examples in the bundle CSV\") } return wrapResult(r) } func wrapResult(r scapiv1alpha3.TestResult) scapiv1alpha3.TestStatus { return scapiv1alpha3.TestStatus{ Results: []scapiv1alpha3.TestResult{r}, } }",
"operator-sdk bundle validate <bundle_dir_or_image> <flags>",
"./bundle ├── manifests │ ├── cache.my.domain_memcacheds.yaml │ └── memcached-operator.clusterserviceversion.yaml └── metadata └── annotations.yaml",
"INFO[0000] All validation tests have completed successfully",
"ERRO[0000] Error: Value cache.example.com/v1alpha1, Kind=Memcached: CRD \"cache.example.com/v1alpha1, Kind=Memcached\" is present in bundle \"\" but not defined in CSV",
"WARN[0000] Warning: Value : (memcached-operator.v0.0.1) annotations not found INFO[0000] All validation tests have completed successfully",
"operator-sdk bundle validate -h",
"operator-sdk bundle validate <bundle_dir_or_image> --select-optional <test_label>",
"operator-sdk bundle validate ./bundle",
"operator-sdk bundle validate <bundle_registry>/<bundle_image_name>:<tag>",
"operator-sdk bundle validate <bundle_dir_or_image> --select-optional <test_label>",
"ERRO[0000] Error: Value apiextensions.k8s.io/v1, Kind=CustomResource: unsupported media type registry+v1 for bundle object WARN[0000] Warning: Value k8sevent.v0.0.1: owned CRD \"k8sevents.k8s.k8sevent.com\" has an empty description",
"operator-sdk bundle validate ./bundle --select-optional name=multiarch",
"INFO[0020] All validation tests have completed successfully",
"ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.ppc64le) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1] ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.s390x) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1] ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.amd64) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1] ERRO[0016] Error: Value test-operator.v0.0.1: not all images specified are providing the support described via the CSV labels. Note that (SO.architecture): (linux.arm64) was not found for the image(s) [quay.io/example-org/test-operator:v1alpha1]",
"WARN[0014] Warning: Value test-operator.v0.0.1: check if the CSV is missing the label (operatorframework.io/arch.<value>) for the Arch(s): [\"amd64\" \"arm64\" \"ppc64le\" \"s390x\"]. Be aware that your Operator manager image [\"quay.io/example-org/test-operator:v1alpha1\"] provides this support. Thus, it is very likely that you want to provide it and if you support more than amd64 architectures, you MUST,use the required labels for all which are supported.Otherwise, your solution cannot be listed on the cluster for these architectures",
"// Simple query nn := types.NamespacedName{ Name: \"cluster\", } infraConfig := &configv1.Infrastructure{} err = crClient.Get(context.Background(), nn, infraConfig) if err != nil { return err } fmt.Printf(\"using crclient: %v\\n\", infraConfig.Status.ControlPlaneTopology) fmt.Printf(\"using crclient: %v\\n\", infraConfig.Status.InfrastructureTopology)",
"operatorConfigInformer := configinformer.NewSharedInformerFactoryWithOptions(configClient, 2*time.Second) infrastructureLister = operatorConfigInformer.Config().V1().Infrastructures().Lister() infraConfig, err := configClient.ConfigV1().Infrastructures().Get(context.Background(), \"cluster\", metav1.GetOptions{}) if err != nil { return err } // fmt.Printf(\"%v\\n\", infraConfig) fmt.Printf(\"%v\\n\", infraConfig.Status.ControlPlaneTopology) fmt.Printf(\"%v\\n\", infraConfig.Status.InfrastructureTopology)",
"../prometheus",
"package controllers import ( \"github.com/prometheus/client_golang/prometheus\" \"sigs.k8s.io/controller-runtime/pkg/metrics\" ) var ( widgets = prometheus.NewCounter( prometheus.CounterOpts{ Name: \"widgets_total\", Help: \"Number of widgets processed\", }, ) widgetFailures = prometheus.NewCounter( prometheus.CounterOpts{ Name: \"widget_failures_total\", Help: \"Number of failed widgets\", }, ) ) func init() { // Register custom metrics with the global prometheus registry metrics.Registry.MustRegister(widgets, widgetFailures) }",
"func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { // Add metrics widgets.Inc() widgetFailures.Inc() return ctrl.Result{}, nil }",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: prometheus-k8s-role namespace: memcached-operator-system rules: - apiGroups: - \"\" resources: - endpoints - pods - services - nodes - secrets verbs: - get - list - watch",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: prometheus-k8s-rolebinding namespace: memcached-operator-system roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: prometheus-k8s-role subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring",
"oc apply -f config/prometheus/role.yaml",
"oc apply -f config/prometheus/rolebinding.yaml",
"oc label namespace <operator_namespace> openshift.io/cluster-monitoring=\"true\"",
"operator-sdk init --plugins=ansible --domain=testmetrics.com",
"operator-sdk create api --group metrics --version v1 --kind Testmetrics --generate-role",
"--- tasks file for Memcached - name: start k8sstatus k8s: definition: kind: Deployment apiVersion: apps/v1 metadata: name: '{{ ansible_operator_meta.name }}-memcached' namespace: '{{ ansible_operator_meta.namespace }}' spec: replicas: \"{{size}}\" selector: matchLabels: app: memcached template: metadata: labels: app: memcached spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v image: \"docker.io/memcached:1.4.36-alpine\" ports: - containerPort: 11211 - osdk_metric: name: my_thing_counter description: This metric counts things counter: {} - osdk_metric: name: my_counter_metric description: Add 3.14 to the counter counter: increment: yes - osdk_metric: name: my_gauge_metric description: Create my gauge and set it to 2. gauge: set: 2 - osdk_metric: name: my_histogram_metric description: Observe my histogram histogram: observe: 2 - osdk_metric: name: my_summary_metric description: Observe my summary summary: observe: 2",
"make docker-build docker-push IMG=<registry>/<user>/<image_name>:<tag>",
"make install",
"make deploy IMG=<registry>/<user>/<image_name>:<tag>",
"apiVersion: metrics.testmetrics.com/v1 kind: Testmetrics metadata: name: testmetrics-sample spec: size: 1",
"oc create -f config/samples/metrics_v1_testmetrics.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE ansiblemetrics-controller-manager-<id> 2/2 Running 0 149m testmetrics-sample-memcached-<id> 1/1 Running 0 147m",
"oc get ep",
"NAME ENDPOINTS AGE ansiblemetrics-controller-manager-metrics-service 10.129.2.70:8443 150m",
"token=`oc create token prometheus-k8s -n openshift-monitoring`",
"oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep my_counter",
"HELP my_counter_metric Add 3.14 to the counter TYPE my_counter_metric counter my_counter_metric 2",
"oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep gauge",
"HELP my_gauge_metric Create my gauge and set it to 2.",
"oc exec ansiblemetrics-controller-manager-<id> -- curl -k -H \"Authoriza tion: Bearer USDtoken\" 'https://10.129.2.70:8443/metrics' | grep Observe",
"HELP my_histogram_metric Observe my histogram HELP my_summary_metric Observe my summary",
"import ( \"github.com/operator-framework/operator-sdk/pkg/leader\" ) func main() { err = leader.Become(context.TODO(), \"memcached-operator-lock\") if err != nil { log.Error(err, \"Failed to retry for leader lock\") os.Exit(1) } }",
"import ( \"sigs.k8s.io/controller-runtime/pkg/manager\" ) func main() { opts := manager.Options{ LeaderElection: true, LeaderElectionID: \"memcached-operator-lock\" } mgr, err := manager.New(cfg, opts) }",
"docker manifest inspect <image_manifest> 1",
"{ \"manifests\": [ { \"digest\": \"sha256:c0669ef34cdc14332c0f1ab0c2c01acb91d96014b172f1a76f3a39e63d1f0bda\", \"mediaType\": \"application/vnd.docker.distribution.manifest.v2+json\", \"platform\": { \"architecture\": \"amd64\", \"os\": \"linux\" }, \"size\": 528 }, { \"digest\": \"sha256:30e6d35703c578ee703230b9dc87ada2ba958c1928615ac8a674fcbbcbb0f281\", \"mediaType\": \"application/vnd.docker.distribution.manifest.v2+json\", \"platform\": { \"architecture\": \"arm64\", \"os\": \"linux\", \"variant\": \"v8\" }, \"size\": 528 },",
"docker inspect <image>",
"FROM golang:1.19 as builder ARG TARGETOS ARG TARGETARCH RUN CGO_ENABLED=0 GOOS=USD{TARGETOS:-linux} GOARCH=USD{TARGETARCH} go build -a -o manager main.go 1",
"PLATFORMS ?= linux/arm64,linux/amd64 1 .PHONY: docker-buildx",
"make docker-buildx IMG=<image_registry>/<organization_name>/<repository_name>:<version_or_sha>",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name>",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name> affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 nodeSelectorTerms: 2 - matchExpressions: 3 - key: kubernetes.io/arch 4 operator: In values: - amd64 - arm64 - ppc64le - s390x - key: kubernetes.io/os 5 operator: In values: - linux",
"Template: corev1.PodTemplateSpec{ Spec: corev1.PodSpec{ Affinity: &corev1.Affinity{ NodeAffinity: &corev1.NodeAffinity{ RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{ NodeSelectorTerms: []corev1.NodeSelectorTerm{ { MatchExpressions: []corev1.NodeSelectorRequirement{ { Key: \"kubernetes.io/arch\", Operator: \"In\", Values: []string{\"amd64\",\"arm64\",\"ppc64le\",\"s390x\"}, }, { Key: \"kubernetes.io/os\", Operator: \"In\", Values: []string{\"linux\"}, }, }, }, }, }, }, }, SecurityContext: &corev1.PodSecurityContext{ }, Containers: []corev1.Container{{ }}, },",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name>",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: containers: - name: <container_name> image: docker.io/<org>/<image_name> affinity: nodeAffinity: preferredDuringSchedulingIgnoredDuringExecution: 1 - preference: matchExpressions: 2 - key: kubernetes.io/arch 3 operator: In 4 values: - amd64 - arm64 weight: 90 5",
"cfg = Config{ log: logf.Log.WithName(\"prune\"), DryRun: false, Clientset: client, LabelSelector: \"app=<operator_name>\", Resources: []schema.GroupVersionKind{ {Group: \"\", Version: \"\", Kind: PodKind}, }, Namespaces: []string{\"<operator_namespace>\"}, Strategy: StrategyConfig{ Mode: MaxCountStrategy, MaxCountSetting: 1, }, PreDeleteHook: myhook, }",
"err := cfg.Execute(ctx)",
"packagemanifests/ └── etcd ├── 0.0.1 │ ├── etcdcluster.crd.yaml │ └── etcdoperator.clusterserviceversion.yaml ├── 0.0.2 │ ├── etcdbackup.crd.yaml │ ├── etcdcluster.crd.yaml │ ├── etcdoperator.v0.0.2.clusterserviceversion.yaml │ └── etcdrestore.crd.yaml └── etcd.package.yaml",
"bundle/ ├── bundle-0.0.1 │ ├── bundle.Dockerfile │ ├── manifests │ │ ├── etcdcluster.crd.yaml │ │ ├── etcdoperator.clusterserviceversion.yaml │ ├── metadata │ │ └── annotations.yaml │ └── tests │ └── scorecard │ └── config.yaml └── bundle-0.0.2 ├── bundle.Dockerfile ├── manifests │ ├── etcdbackup.crd.yaml │ ├── etcdcluster.crd.yaml │ ├── etcdoperator.v0.0.2.clusterserviceversion.yaml │ ├── etcdrestore.crd.yaml ├── metadata │ └── annotations.yaml └── tests └── scorecard └── config.yaml",
"operator-sdk pkgman-to-bundle <package_manifests_dir> \\ 1 [--output-dir <directory>] \\ 2 --image-tag-base <image_name_base> 3",
"operator-sdk run bundle <bundle_image_name>:<tag>",
"INFO[0025] Successfully created registry pod: quay-io-my-etcd-0-9-4 INFO[0025] Created CatalogSource: etcd-catalog INFO[0026] OperatorGroup \"operator-sdk-og\" created INFO[0026] Created Subscription: etcdoperator-v0-9-4-sub INFO[0031] Approved InstallPlan install-5t58z for the Subscription: etcdoperator-v0-9-4-sub INFO[0031] Waiting for ClusterServiceVersion \"default/etcdoperator.v0.9.4\" to reach 'Succeeded' phase INFO[0032] Waiting for ClusterServiceVersion \"default/etcdoperator.v0.9.4\" to appear INFO[0048] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Pending INFO[0049] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Installing INFO[0064] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Succeeded INFO[0065] OLM has successfully installed \"etcdoperator.v0.9.4\"",
"operator-sdk <command> [<subcommand>] [<argument>] [<flags>]",
"operator-sdk completion bash",
"bash completion for operator-sdk -*- shell-script -*- ex: ts=4 sw=4 et filetype=sh"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/operators/developing-operators |
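A hedged usage note for the completion command shown above: the generated script is typically loaded by sourcing the command's output in the current shell, or persisted for new sessions. The target file path below is an assumption for illustration, not taken from this document.

source <(operator-sdk completion bash)
# optional: persist completion for future shells (illustrative path)
operator-sdk completion bash > ~/.operator_sdk_completion
echo 'source ~/.operator_sdk_completion' >> ~/.bashrc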
Chapter 10. Preparing and deploying a KVM Guest Image by using RHEL image builder | Chapter 10. Preparing and deploying a KVM Guest Image by using RHEL image builder Use RHEL image builder to create a purpose-built .qcow2 image that you can deploy on a Kernel-based Virtual Machine (KVM) based hypervisor. Creating a customized KVM guest image involves the following high-level steps: Create a blueprint for the .qcow2 image. Create a .qcow2 image by using RHEL image builder. Create a virtual machine from the KVM guest image. 10.1. Creating customized KVM guest images by using RHEL image builder You can create a customized .qcow2 KVM guest image by using RHEL image builder. The following procedure shows the steps on the GUI, but you can also use the CLI. Prerequisites You must be in the root or weldr group to access the system. The cockpit-composer package is installed. On a RHEL system, you have opened the RHEL image builder dashboard of the web console. You have created a blueprint. See Creating a blueprint in the web console interface . Procedure Click the blueprint name you created. Select the tab Images . Click Create Image to create your customized image. The Create Image window opens. From the Type drop-down menu list, select QEMU Image(.qcow2) . Set the size that you want the image to be when instantiated and click Create . A small pop-up on the upper right side of the window informs you that the image creation has been added to the queue. After the image creation process is complete, you can see the Image build complete status. Verification Click the breadcrumbs icon and select the Download option. RHEL image builder downloads the KVM guest image .qcow2 file to your default download location. Additional resources Creating a blueprint in the web console interface 10.2. Creating a virtual machine from a KVM guest image With RHEL image builder, you can build a .qcow2 image, and use a KVM guest image to create a VM. The KVM guest images created using RHEL image builder already have cloud-init installed and enabled. Prerequisites You created a .qcow2 image by using RHEL image builder. See Creating a blueprint in the web console interface. You have the qemu-kvm package installed on your system. You can check if the /dev/kvm device is available on your system, and virtualization features are enabled in the BIOS. You have the libvirt and virt-install packages installed on your system. You have the genisoimage utility, which is provided by the xorriso package, installed on your system. Procedure Move the .qcow2 image that you created by using RHEL image builder to the /var/lib/libvirt/images/ directory. Create a directory, for example, cloudinitiso and navigate to this newly created directory: Create a file named meta-data . Add the following information to this file: Create a file named user-data . Add the following information to the file: ssh_authorized_keys is your SSH public key. You can find your SSH public key in ~/.ssh/<id_rsa.pub> . Use the genisoimage utility to create an ISO image that includes the user-data and meta-data files. Create a new VM from the KVM Guest Image using the virt-install command. Include the ISO image you created in step 4 as an attachment to the VM image. --graphics none - means it is a headless RHEL 8 VM. --vcpus 4 - means that it uses 4 virtual CPUs. --memory 4096 - means it uses 4096 MB RAM. The VM installation starts: Verification After the boot is complete, the VM shows a text login interface. 
To log in to the local console of the VM, use the user details from the user-data file: Enter admin as a username and press Enter . Enter password as password and press Enter . After the login authentication is complete, you have access to the VM using the CLI. Additional resources Enabling virtualization . Configuring and managing cloud-init for RHEL 8 . cloud-init significant directories and files . | [
"mkdir cloudinitiso cd cloudinitiso",
"instance-id: citest local-hostname: vmname",
"#cloud-config user: admin password: password chpasswd: {expire: False} ssh_pwauth: True ssh_authorized_keys: - ssh-rsa AAA...fhHQ== [email protected]",
"genisoimage -output cloud-init.iso -volid cidata -joliet -rock user-data meta-data I: -input-charset not specified, using utf-8 (detected in locale settings) Total translation table size: 0 Total rockridge attributes bytes: 331 Total directory bytes: 0 Path table size(bytes): 10 Max brk space used 0 183 extents written (0 MB)",
"virt-install --memory 4096 --vcpus 4 --name myvm --disk rhel-8-x86_64-kvm.qcow2,device=disk,bus=virtio,format=qcow2 --disk cloud-init.iso,device=cdrom --os-variant rhel 8 --virt-type kvm --graphics none --import",
"Starting install Connected to domain mytestcivm [ OK ] Started Execute cloud user/final scripts. [ OK ] Reached target Cloud-init target. Red Hat Enterprise Linux 8 (Ootpa) Kernel 4.18.0-221.el8.x86_64 on an x86_64"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/composing_a_customized_rhel_system_image/preparing-and-deploying-kvm-guest-images-with-image-builder_composing-a-customized-rhel-system-image |
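A brief verification sketch, not part of the original procedure: the domain name myvm matches the virt-install example above, and virsh domifaddr assumes the VM received a DHCP lease on the default libvirt network; the IP address placeholder and the admin user come from that lease and from the user-data file.

virsh list --all          # confirm the myvm domain is running
virsh domifaddr myvm      # look up the IP address leased to the VM
ssh admin@<vm_ip_address> # log in as the cloud-init user "admin"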
Chapter 4. Remote health monitoring with connected clusters | Chapter 4. Remote health monitoring with connected clusters 4.1. About remote health monitoring OpenShift Container Platform collects telemetry and configuration data about your cluster and reports it to Red Hat by using the Telemeter Client and the Insights Operator. The data that is provided to Red Hat enables the benefits outlined in this document. A cluster that reports data to Red Hat through Telemetry and the Insights Operator is considered a connected cluster . Telemetry is the term that Red Hat uses to describe the information being sent to Red Hat by the OpenShift Container Platform Telemeter Client. Lightweight attributes are sent from connected clusters to Red Hat to enable subscription management automation, monitor the health of clusters, assist with support, and improve customer experience. The Insights Operator gathers OpenShift Container Platform configuration data and sends it to Red Hat. The data is used to produce insights about potential issues that a cluster might be exposed to. These insights are communicated to cluster administrators on OpenShift Cluster Manager Hybrid Cloud Console . More information is provided in this document about these two processes. Telemetry and Insights Operator benefits Telemetry and the Insights Operator enable the following benefits for end-users: Enhanced identification and resolution of issues . Events that might seem normal to an end-user can be observed by Red Hat from a broader perspective across a fleet of clusters. Some issues can be more rapidly identified from this point of view and resolved without an end-user needing to open a support case or file a Jira issue . Advanced release management . OpenShift Container Platform offers the candidate , fast , and stable release channels, which enable you to choose an update strategy. The graduation of a release from fast to stable is dependent on the success rate of updates and on the events seen during upgrades. With the information provided by connected clusters, Red Hat can improve the quality of releases to stable channels and react more rapidly to issues found in the fast channels. Targeted prioritization of new features and functionality . The data collected provides insights about which areas of OpenShift Container Platform are used most. With this information, Red Hat can focus on developing the new features and functionality that have the greatest impact for our customers. A streamlined support experience . You can provide a cluster ID for a connected cluster when creating a support ticket on the Red Hat Customer Portal . This enables Red Hat to deliver a streamlined support experience that is specific to your cluster, by using the connected information. This document provides more information about that enhanced support experience. Predictive analytics . The insights displayed for your cluster on OpenShift Cluster Manager Hybrid Cloud Console are enabled by the information collected from connected clusters. Red Hat is investing in applying deep learning, machine learning, and artificial intelligence automation to help identify issues that OpenShift Container Platform clusters are exposed to. 4.1.1. About Telemetry Telemetry sends a carefully chosen subset of the cluster monitoring metrics to Red Hat. The Telemeter Client fetches the metrics values every four minutes and thirty seconds and uploads the data to Red Hat. These metrics are described in this document. 
This stream of data is used by Red Hat to monitor the clusters in real-time and to react as necessary to problems that impact our customers. It also allows Red Hat to roll out OpenShift Container Platform upgrades to customers to minimize service impact and continuously improve the upgrade experience. This debugging information is available to Red Hat Support and Engineering teams with the same restrictions as accessing data reported through support cases. All connected cluster information is used by Red Hat to help make OpenShift Container Platform better and more intuitive to use. Additional resources See the OpenShift Container Platform update documentation for more information about updating or upgrading a cluster. 4.1.1.1. Information collected by Telemetry The following information is collected by Telemetry: 4.1.1.1.1. System information Version information, including the OpenShift Container Platform cluster version and installed update details that are used to determine update version availability Update information, including the number of updates available per cluster, the channel and image repository used for an update, update progress information, and the number of errors that occur in an update The unique random identifier that is generated during an installation Configuration details that help Red Hat Support to provide beneficial support for customers, including node configuration at the cloud infrastructure level, hostnames, IP addresses, Kubernetes pod names, namespaces, and services The OpenShift Container Platform framework components installed in a cluster and their condition and status Events for all namespaces listed as "related objects" for a degraded Operator Information about degraded software Information about the validity of certificates The name of the provider platform that OpenShift Container Platform is deployed on and the data center location 4.1.1.1.2. Sizing Information Sizing information about clusters, machine types, and machines, including the number of CPU cores and the amount of RAM used for each The number of etcd members and the number of objects stored in the etcd cluster Number of application builds by build strategy type 4.1.1.1.3. Usage information Usage information about components, features, and extensions Usage details about Technology Previews and unsupported configurations Telemetry does not collect identifying information such as usernames or passwords. Red Hat does not intend to collect personal information. If Red Hat discovers that personal information has been inadvertently received, Red Hat will delete such information. To the extent that any telemetry data constitutes personal data, please refer to the Red Hat Privacy Statement for more information about Red Hat's privacy practices. Additional resources See Showing data collected by Telemetry for details about how to list the attributes that Telemetry gathers from Prometheus in OpenShift Container Platform. See the upstream cluster-monitoring-operator source code for a list of the attributes that Telemetry gathers from Prometheus. Telemetry is installed and enabled by default. If you need to opt out of remote health reporting, see Opting out of remote health reporting . 4.1.2. About the Insights Operator The Insights Operator periodically gathers configuration and component failure status and, by default, reports that data every two hours to Red Hat. This information enables Red Hat to assess configuration and deeper failure data than is reported through Telemetry. 
Users of OpenShift Container Platform can display the report of each cluster in the Insights Advisor service on Red Hat Hybrid Cloud Console. If any issues have been identified, Insights provides further details and, if available, steps on how to solve a problem. The Insights Operator does not collect identifying information, such as user names, passwords, or certificates. See Red Hat Insights Data & Application Security for information about Red Hat Insights data collection and controls. Red Hat uses all connected cluster information to: Identify potential cluster issues and provide a solution and preventive actions in the Insights Advisor service on Red Hat Hybrid Cloud Console Improve OpenShift Container Platform by providing aggregated and critical information to product and support teams Make OpenShift Container Platform more intuitive Additional resources The Insights Operator is installed and enabled by default. If you need to opt out of remote health reporting, see Opting out of remote health reporting . 4.1.2.1. Information collected by the Insights Operator The following information is collected by the Insights Operator: General information about your cluster and its components to identify issues that are specific to your OpenShift Container Platform version and environment Configuration files, such as the image registry configuration, of your cluster to determine incorrect settings and issues that are specific to parameters you set Errors that occur in the cluster components Progress information of running updates, and the status of any component upgrades Details of the platform that OpenShift Container Platform is deployed on, such as Amazon Web Services, and the region that the cluster is located in Cluster workload information transformed into discrete Secure Hash Algorithm (SHA) values, which allows Red Hat to assess workloads for security and version vulnerabilities without disclosing sensitive details If an Operator reports an issue, information is collected about core OpenShift Container Platform pods in the openshift-* and kube-* projects. This includes state, resource, security context, volume information, and more. Additional resources See Showing data collected by the Insights Operator for details about how to review the data that is collected by the Insights Operator. The Insights Operator source code is available for review and contribution. See the Insights Operator upstream project for a list of the items collected by the Insights Operator. 4.1.3. Understanding Telemetry and Insights Operator data flow The Telemeter Client collects selected time series data from the Prometheus API. The time series data is uploaded to api.openshift.com every four minutes and thirty seconds for processing. The Insights Operator gathers selected data from the Kubernetes API and the Prometheus API into an archive. The archive is uploaded to OpenShift Cluster Manager Hybrid Cloud Console every two hours for processing. The Insights Operator also downloads the latest Insights analysis from OpenShift Cluster Manager Hybrid Cloud Console . This is used to populate the Insights status pop-up that is included in the Overview page in the OpenShift Container Platform web console. All of the communication with Red Hat occurs over encrypted channels by using Transport Layer Security (TLS) and mutual certificate authentication. All of the data is encrypted in transit and at rest.
Access to the systems that handle customer data is controlled through multi-factor authentication and strict authorization controls. Access is granted on a need-to-know basis and is limited to required operations. Telemetry and Insights Operator data flow Additional resources See Monitoring overview for more information about the OpenShift Container Platform monitoring stack. See Configuring your firewall for details about configuring a firewall and enabling endpoints for Telemetry and Insights 4.1.4. Additional details about how remote health monitoring data is used The information collected to enable remote health monitoring is detailed in Information collected by Telemetry and Information collected by the Insights Operator . As further described in the preceding sections of this document, Red Hat collects data about your use of the Red Hat Product(s) for purposes such as providing support and upgrades, optimizing performance or configuration, minimizing service impacts, identifying and remediating threats, troubleshooting, improving the offerings and user experience, responding to issues, and for billing purposes if applicable. Collection safeguards Red Hat employs technical and organizational measures designed to protect the telemetry and configuration data. Sharing Red Hat may share the data collected through Telemetry and the Insights Operator internally within Red Hat to improve your user experience. Red Hat may share telemetry and configuration data with its business partners in an aggregated form that does not identify customers to help the partners better understand their markets and their customers' use of Red Hat offerings or to ensure the successful integration of products jointly supported by those partners. Third parties Red Hat may engage certain third parties to assist in the collection, analysis, and storage of the Telemetry and configuration data. User control / enabling and disabling telemetry and configuration data collection You may disable OpenShift Container Platform Telemetry and the Insights Operator by following the instructions in Opting out of remote health reporting . 4.2. Showing data collected by remote health monitoring As an administrator, you can review the metrics collected by Telemetry and the Insights Operator. 4.2.1. Showing data collected by Telemetry You can view the cluster and components time series data captured by Telemetry. Prerequisites You have installed the OpenShift Container Platform CLI ( oc ). You have access to the cluster as a user with the cluster-admin role or the cluster-monitoring-view role. Procedure Log in to a cluster. 
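For example, you can authenticate with the CLI before running the query in the next step. This is a sketch only; the API server URL and token are placeholders that you replace with your own values:

oc login https://api.<cluster_domain>:6443 --token=<token>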
Run the following command, which queries a cluster's Prometheus service and returns the full set of time series data captured by Telemetry: USD curl -G -k -H "Authorization: Bearer USD(oc whoami -t)" \ https://USD(oc get route prometheus-k8s-federate -n openshift-monitoring -o \ jsonpath="{.spec.host}")/federate \ --data-urlencode 'match[]={__name__=~"cluster:usage:.*"}' \ --data-urlencode 'match[]={__name__="count:up0"}' \ --data-urlencode 'match[]={__name__="count:up1"}' \ --data-urlencode 'match[]={__name__="cluster_version"}' \ --data-urlencode 'match[]={__name__="cluster_version_available_updates"}' \ --data-urlencode 'match[]={__name__="cluster_version_capability"}' \ --data-urlencode 'match[]={__name__="cluster_operator_up"}' \ --data-urlencode 'match[]={__name__="cluster_operator_conditions"}' \ --data-urlencode 'match[]={__name__="cluster_version_payload"}' \ --data-urlencode 'match[]={__name__="cluster_installer"}' \ --data-urlencode 'match[]={__name__="cluster_infrastructure_provider"}' \ --data-urlencode 'match[]={__name__="cluster_feature_set"}' \ --data-urlencode 'match[]={__name__="instance:etcd_object_counts:sum"}' \ --data-urlencode 'match[]={__name__="ALERTS",alertstate="firing"}' \ --data-urlencode 'match[]={__name__="code:apiserver_request_total:rate:sum"}' \ --data-urlencode 'match[]={__name__="cluster:capacity_cpu_cores:sum"}' \ --data-urlencode 'match[]={__name__="cluster:capacity_memory_bytes:sum"}' \ --data-urlencode 'match[]={__name__="cluster:cpu_usage_cores:sum"}' \ --data-urlencode 'match[]={__name__="cluster:memory_usage_bytes:sum"}' \ --data-urlencode 'match[]={__name__="openshift:cpu_usage_cores:sum"}' \ --data-urlencode 'match[]={__name__="openshift:memory_usage_bytes:sum"}' \ --data-urlencode 'match[]={__name__="workload:cpu_usage_cores:sum"}' \ --data-urlencode 'match[]={__name__="workload:memory_usage_bytes:sum"}' \ --data-urlencode 'match[]={__name__="cluster:virt_platform_nodes:sum"}' \ --data-urlencode 'match[]={__name__="cluster:node_instance_type_count:sum"}' \ --data-urlencode 'match[]={__name__="cnv:vmi_status_running:count"}' \ --data-urlencode 'match[]={__name__="cluster:vmi_request_cpu_cores:sum"}' \ --data-urlencode 'match[]={__name__="node_role_os_version_machine:cpu_capacity_cores:sum"}' \ --data-urlencode 'match[]={__name__="node_role_os_version_machine:cpu_capacity_sockets:sum"}' \ --data-urlencode 'match[]={__name__="subscription_sync_total"}' \ --data-urlencode 'match[]={__name__="olm_resolution_duration_seconds"}' \ --data-urlencode 'match[]={__name__="csv_succeeded"}' \ --data-urlencode 'match[]={__name__="csv_abnormal"}' \ --data-urlencode 'match[]={__name__="cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum"}' \ --data-urlencode 'match[]={__name__="cluster:kubelet_volume_stats_used_bytes:provisioner:sum"}' \ --data-urlencode 'match[]={__name__="ceph_cluster_total_bytes"}' \ --data-urlencode 'match[]={__name__="ceph_cluster_total_used_raw_bytes"}' \ --data-urlencode 'match[]={__name__="ceph_health_status"}' \ --data-urlencode 'match[]={__name__="job:ceph_osd_metadata:count"}' \ --data-urlencode 'match[]={__name__="job:kube_pv:count"}' \ --data-urlencode 'match[]={__name__="job:ceph_pools_iops:total"}' \ --data-urlencode 'match[]={__name__="job:ceph_pools_iops_bytes:total"}' \ --data-urlencode 'match[]={__name__="job:ceph_versions_running:count"}' \ --data-urlencode 'match[]={__name__="job:noobaa_total_unhealthy_buckets:sum"}' \ --data-urlencode 'match[]={__name__="job:noobaa_bucket_count:sum"}' \ 
--data-urlencode 'match[]={__name__="job:noobaa_total_object_count:sum"}' \ --data-urlencode 'match[]={__name__="noobaa_accounts_num"}' \ --data-urlencode 'match[]={__name__="noobaa_total_usage"}' \ --data-urlencode 'match[]={__name__="console_url"}' \ --data-urlencode 'match[]={__name__="cluster:ovnkube_master_egress_routing_via_host:max"}' \ --data-urlencode 'match[]={__name__="cluster:network_attachment_definition_instances:max"}' \ --data-urlencode 'match[]={__name__="cluster:network_attachment_definition_enabled_instance_up:max"}' \ --data-urlencode 'match[]={__name__="cluster:ingress_controller_aws_nlb_active:sum"}' \ --data-urlencode 'match[]={__name__="insightsclient_request_send_total"}' \ --data-urlencode 'match[]={__name__="cam_app_workload_migrations"}' \ --data-urlencode 'match[]={__name__="cluster:apiserver_current_inflight_requests:sum:max_over_time:2m"}' \ --data-urlencode 'match[]={__name__="cluster:alertmanager_integrations:max"}' \ --data-urlencode 'match[]={__name__="cluster:telemetry_selected_series:count"}' \ --data-urlencode 'match[]={__name__="openshift:prometheus_tsdb_head_series:sum"}' \ --data-urlencode 'match[]={__name__="openshift:prometheus_tsdb_head_samples_appended_total:sum"}' \ --data-urlencode 'match[]={__name__="monitoring:container_memory_working_set_bytes:sum"}' \ --data-urlencode 'match[]={__name__="namespace_job:scrape_series_added:topk3_sum1h"}' \ --data-urlencode 'match[]={__name__="namespace_job:scrape_samples_post_metric_relabeling:topk3"}' \ --data-urlencode 'match[]={__name__="monitoring:haproxy_server_http_responses_total:sum"}' \ --data-urlencode 'match[]={__name__="rhmi_status"}' \ --data-urlencode 'match[]={__name__="cluster_legacy_scheduler_policy"}' \ --data-urlencode 'match[]={__name__="cluster_master_schedulable"}' \ --data-urlencode 'match[]={__name__="che_workspace_status"}' \ --data-urlencode 'match[]={__name__="che_workspace_started_total"}' \ --data-urlencode 'match[]={__name__="che_workspace_failure_total"}' \ --data-urlencode 'match[]={__name__="che_workspace_start_time_seconds_sum"}' \ --data-urlencode 'match[]={__name__="che_workspace_start_time_seconds_count"}' \ --data-urlencode 'match[]={__name__="cco_credentials_mode"}' \ --data-urlencode 'match[]={__name__="cluster:kube_persistentvolume_plugin_type_counts:sum"}' \ --data-urlencode 'match[]={__name__="visual_web_terminal_sessions_total"}' \ --data-urlencode 'match[]={__name__="acm_managed_cluster_info"}' \ --data-urlencode 'match[]={__name__="cluster:vsphere_vcenter_info:sum"}' \ --data-urlencode 'match[]={__name__="cluster:vsphere_esxi_version_total:sum"}' \ --data-urlencode 'match[]={__name__="cluster:vsphere_node_hw_version_total:sum"}' \ --data-urlencode 'match[]={__name__="openshift:build_by_strategy:sum"}' \ --data-urlencode 'match[]={__name__="rhods_aggregate_availability"}' \ --data-urlencode 'match[]={__name__="rhods_total_users"}' \ --data-urlencode 'match[]={__name__="instance:etcd_disk_wal_fsync_duration_seconds:histogram_quantile",quantile="0.99"}' \ --data-urlencode 'match[]={__name__="instance:etcd_mvcc_db_total_size_in_bytes:sum"}' \ --data-urlencode 'match[]={__name__="instance:etcd_network_peer_round_trip_time_seconds:histogram_quantile",quantile="0.99"}' \ --data-urlencode 'match[]={__name__="instance:etcd_mvcc_db_total_size_in_use_in_bytes:sum"}' \ --data-urlencode 'match[]={__name__="instance:etcd_disk_backend_commit_duration_seconds:histogram_quantile",quantile="0.99"}' \ --data-urlencode 'match[]={__name__="jaeger_operator_instances_storage_types"}' \ 
--data-urlencode 'match[]={__name__="jaeger_operator_instances_strategies"}' \ --data-urlencode 'match[]={__name__="jaeger_operator_instances_agent_strategies"}' \ --data-urlencode 'match[]={__name__="appsvcs:cores_by_product:sum"}' \ --data-urlencode 'match[]={__name__="nto_custom_profiles:count"}' \ --data-urlencode 'match[]={__name__="openshift_csi_share_configmap"}' \ --data-urlencode 'match[]={__name__="openshift_csi_share_secret"}' \ --data-urlencode 'match[]={__name__="openshift_csi_share_mount_failures_total"}' \ --data-urlencode 'match[]={__name__="openshift_csi_share_mount_requests_total"}' \ --data-urlencode 'match[]={__name__="cluster:velero_backup_total:max"}' \ --data-urlencode 'match[]={__name__="cluster:velero_restore_total:max"}' \ --data-urlencode 'match[]={__name__="eo_es_storage_info"}' \ --data-urlencode 'match[]={__name__="eo_es_redundancy_policy_info"}' \ --data-urlencode 'match[]={__name__="eo_es_defined_delete_namespaces_total"}' \ --data-urlencode 'match[]={__name__="eo_es_misconfigured_memory_resources_info"}' \ --data-urlencode 'match[]={__name__="cluster:eo_es_data_nodes_total:max"}' \ --data-urlencode 'match[]={__name__="cluster:eo_es_documents_created_total:sum"}' \ --data-urlencode 'match[]={__name__="cluster:eo_es_documents_deleted_total:sum"}' \ --data-urlencode 'match[]={__name__="pod:eo_es_shards_total:max"}' \ --data-urlencode 'match[]={__name__="eo_es_cluster_management_state_info"}' \ --data-urlencode 'match[]={__name__="imageregistry:imagestreamtags_count:sum"}' \ --data-urlencode 'match[]={__name__="imageregistry:operations_count:sum"}' \ --data-urlencode 'match[]={__name__="log_logging_info"}' \ --data-urlencode 'match[]={__name__="log_collector_error_count_total"}' \ --data-urlencode 'match[]={__name__="log_forwarder_pipeline_info"}' \ --data-urlencode 'match[]={__name__="log_forwarder_input_info"}' \ --data-urlencode 'match[]={__name__="log_forwarder_output_info"}' \ --data-urlencode 'match[]={__name__="cluster:log_collected_bytes_total:sum"}' \ --data-urlencode 'match[]={__name__="cluster:log_logged_bytes_total:sum"}' \ --data-urlencode 'match[]={__name__="cluster:kata_monitor_running_shim_count:sum"}' 4.2.2. Showing data collected by the Insights Operator You can review the data that is collected by the Insights Operator. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Find the name of the currently running pod for the Insights Operator: USD INSIGHTS_OPERATOR_POD=USD(oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running) Copy the recent data archives collected by the Insights Operator: USD oc cp openshift-insights/USDINSIGHTS_OPERATOR_POD:/var/lib/insights-operator ./insights-data The recent Insights Operator archives are now available in the insights-data directory. 4.3. Opting out of remote health reporting You may choose to opt out of reporting health and usage data for your cluster. To opt out of remote health reporting, you must: Modify the global cluster pull secret to disable remote health reporting. Update the cluster to use this modified pull secret. 4.3.1. Consequences of disabling remote health reporting In OpenShift Container Platform, customers can opt out of reporting usage information. However, connected clusters allow Red Hat to react more quickly to problems and better support our customers, as well as better understand how product upgrades impact clusters. 
Connected clusters also help to simplify the subscription and entitlement process and enable the OpenShift Cluster Manager service to provide an overview of your clusters and their subscription status. Red Hat strongly recommends leaving health and usage reporting enabled for pre-production and test clusters even if it is necessary to opt out for production clusters. This allows Red Hat to be a participant in qualifying OpenShift Container Platform in your environments and react more rapidly to product issues. Some of the consequences of opting out of having a connected cluster are: Red Hat will not be able to monitor the success of product upgrades or the health of your clusters without a support case being opened. Red Hat will not be able to use configuration data to better triage customer support cases and identify which configurations our customers find important. The OpenShift Cluster Manager will not show data about your clusters including health and usage information. Your subscription entitlement information must be manually entered via console.redhat.com without the benefit of automatic usage reporting. In restricted networks, Telemetry and Insights data can still be reported through appropriate configuration of your proxy. 4.3.2. Modifying the global cluster pull secret to disable remote health reporting You can modify your existing global cluster pull secret to disable remote health reporting. This disables both Telemetry and the Insights Operator. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Download the global cluster pull secret to your local file system. USD oc extract secret/pull-secret -n openshift-config --to=. In a text editor, edit the .dockerconfigjson file that was downloaded. Remove the cloud.openshift.com JSON entry, for example: "cloud.openshift.com":{"auth":"<hash>","email":"<email_address>"} Save the file. You can now update your cluster to use this modified pull secret. 4.3.3. Registering your disconnected cluster Register your disconnected OpenShift Container Platform cluster on the Red Hat Hybrid Cloud Console so that your cluster is not impacted by the consequences listed in the section named "Consequences of disabling remote health reporting". Important By registering your disconnected cluster, you can continue to report your subscription usage to Red Hat. In turn, Red Hat can return accurate usage and capacity trends associated with your subscription, so that you can use the returned information to better organize subscription allocations across all of your resources. Prerequisites You are logged in to the OpenShift Container Platform web console as cluster-admin . You can log in to the Red Hat Hybrid Cloud Console. Procedure Go to the Register disconnected cluster web page on the Red Hat Hybrid Cloud Console. Optional: To access the Register disconnected cluster web page from the home page of the Red Hat Hybrid Cloud Console, go to the Clusters navigation menu item and then select the Register cluster button. Enter your cluster's details in the provided fields on the Register disconnected cluster page. From the Subscription settings section of the page, select the subscription settings that apply to your Red Hat subscription offering. To register your disconnected cluster, select the Register cluster button. Additional resources Consequences of disabling remote health reporting How does the subscriptions service show my subscription data? (Getting Started with the Subscription Service) 4.3.4.
Updating the global cluster pull secret You can update the global pull secret for your cluster by either replacing the current pull secret or appending a new pull secret. The procedure is required when users use a separate registry to store images than the registry used during installation. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Optional: To append a new pull secret to the existing pull secret, complete the following steps: Enter the following command to download the pull secret: USD oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' ><pull_secret_location> 1 1 Provide the path to the pull secret file. Enter the following command to add the new pull secret: USD oc registry login --registry="<registry>" \ 1 --auth-basic="<username>:<password>" \ 2 --to=<pull_secret_location> 3 1 Provide the new registry. You can include multiple repositories within the same registry, for example: --registry="<registry/my-namespace/my-repository>" . 2 Provide the credentials of the new registry. 3 Provide the path to the pull secret file. Alternatively, you can perform a manual update to the pull secret file. Enter the following command to update the global pull secret for your cluster: USD oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1 1 Provide the path to the new pull secret file. This update is rolled out to all nodes, which can take some time depending on the size of your cluster. Note As of OpenShift Container Platform 4.7.4, changes to the global pull secret no longer trigger a node drain or reboot. 4.4. Enabling remote health reporting If you or your organization have disabled remote health reporting, you can enable this feature again. You can see that remote health reporting is disabled from the message "Insights not available" in the Status tile on the OpenShift Container Platform Web Console Overview page. To enable remote health reporting, you must Modify the global cluster pull secret with a new authorization token. Note Enabling remote health reporting enables both Insights Operator and Telemetry. 4.4.1. Modifying your global cluster pull secret to enable remote health reporting You can modify your existing global cluster pull secret to enable remote health reporting. If you have previously disabled remote health monitoring, you must first download a new pull secret with your console.openshift.com access token from Red Hat OpenShift Cluster Manager. Prerequisites Access to the cluster as a user with the cluster-admin role. Access to OpenShift Cluster Manager. Procedure Navigate to https://console.redhat.com/openshift/downloads . From Tokens Pull Secret , click Download . The file pull-secret.txt containing your cloud.openshift.com access token in JSON format downloads: { "auths": { "cloud.openshift.com": { "auth": " <your_token> ", "email": " <email_address> " } } } Download the global cluster pull secret to your local file system. USD oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' > pull-secret Make a backup copy of your pull secret. USD cp pull-secret pull-secret-backup Open the pull-secret file in a text editor. Append the cloud.openshift.com JSON entry from pull-secret.txt into auths . Save the file. Update the secret in your cluster. 
oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=pull-secret It may take several minutes for the secret to update and your cluster to begin reporting. Verification Navigate to the OpenShift Container Platform Web Console Overview page. Insights in the Status tile reports the number of issues found. 4.5. Using Insights to identify issues with your cluster Insights repeatedly analyzes the data Insights Operator sends. Users of OpenShift Container Platform can display the report in the Insights Advisor service on Red Hat Hybrid Cloud Console. 4.5.1. About Red Hat Insights Advisor for OpenShift Container Platform You can use Insights Advisor to assess and monitor the health of your OpenShift Container Platform clusters. Whether you are concerned about individual clusters, or with your whole infrastructure, it is important to be aware of the exposure of your cluster infrastructure to issues that can affect service availability, fault tolerance, performance, or security. Using cluster data collected by the Insights Operator, Insights repeatedly compares that data against a library of recommendations . Each recommendation is a set of cluster-environment conditions that can leave OpenShift Container Platform clusters at risk. The results of the Insights analysis are available in the Insights Advisor service on Red Hat Hybrid Cloud Console. In the Console, you can perform the following actions: See clusters impacted by a specific recommendation. Use robust filtering capabilities to refine your results to those recommendations. Learn more about individual recommendations, details about the risks they present, and get resolutions tailored to your individual clusters. Share results with other stakeholders. 4.5.2. Understanding Insights Advisor recommendations Insights Advisor bundles information about various cluster states and component configurations that can negatively affect the service availability, fault tolerance, performance, or security of your clusters. This information set is called a recommendation in Insights Advisor and includes the following information: Name: A concise description of the recommendation Added: When the recommendation was published to the Insights Advisor archive Category: Whether the issue has the potential to negatively affect service availability, fault tolerance, performance, or security Total risk: A value derived from the likelihood that the condition will negatively affect your infrastructure, and the impact on operations if that were to happen Clusters: A list of clusters on which a recommendation is detected Description: A brief synopsis of the issue, including how it affects your clusters Link to associated topics: More information from Red Hat about the issue 4.5.3. Displaying potential issues with your cluster This section describes how to display the Insights report in Insights Advisor on OpenShift Cluster Manager Hybrid Cloud Console . Note that Insights repeatedly analyzes your cluster and shows the latest results. These results can change, for example, if you fix an issue or a new issue has been detected. Prerequisites Your cluster is registered on OpenShift Cluster Manager Hybrid Cloud Console . Remote health reporting is enabled, which is the default. You are logged in to OpenShift Cluster Manager Hybrid Cloud Console . Procedure Navigate to Advisor Recommendations on OpenShift Cluster Manager Hybrid Cloud Console . 
Depending on the result, Insights Advisor displays one of the following: No matching recommendations found , if Insights did not identify any issues. A list of issues Insights has detected, grouped by risk (low, moderate, important, and critical). No clusters yet , if Insights has not yet analyzed the cluster. The analysis starts shortly after the cluster has been installed, registered, and connected to the internet. If any issues are displayed, click the > icon in front of the entry for more details. Depending on the issue, the details can also contain a link to more information from Red Hat about the issue. 4.5.4. Displaying all Insights Advisor recommendations The Recommendations view, by default, only displays the recommendations that are detected on your clusters. However, you can view all of the recommendations in the advisor archive. Prerequisites Remote health reporting is enabled, which is the default. Your cluster is registered on Red Hat Hybrid Cloud Console. You are logged in to OpenShift Cluster Manager Hybrid Cloud Console . Procedure Navigate to Advisor Recommendations on OpenShift Cluster Manager Hybrid Cloud Console . Click the X icons to the Clusters Impacted and Status filters. You can now browse through all of the potential recommendations for your cluster. 4.5.5. Advisor recommendation filters The Insights advisor service can return a large number of recommendations. To focus on your most critical recommendations, you can apply filters to the Advisor recommendations list to remove low-priority recommendations. By default, filters are set to only show enabled recommendations that are impacting one or more clusters. To view all or disabled recommendations in the Insights library, you can customize the filters. To apply a filter, select a filter type and then set its value based on the options that are available in the drop-down list. You can apply multiple filters to the list of recommendations. You can set the following filter types: Name: Search for a recommendation by name. Total risk: Select one or more values from Critical , Important , Moderate , and Low indicating the likelihood and the severity of a negative impact on a cluster. Impact: Select one or more values from Critical , High , Medium , and Low indicating the potential impact to the continuity of cluster operations. Likelihood: Select one or more values from Critical , High , Medium , and Low indicating the potential for a negative impact to a cluster if the recommendation comes to fruition. Category: Select one or more categories from Service Availability , Performance , Fault Tolerance , Security , and Best Practice to focus your attention on. Status: Click a radio button to show enabled recommendations (default), disabled recommendations, or all recommendations. Clusters impacted: Set the filter to show recommendations currently impacting one or more clusters, non-impacting recommendations, or all recommendations. Risk of change: Select one or more values from High , Moderate , Low , and Very low indicating the risk that the implementation of the resolution could have on cluster operations. 4.5.5.1. Filtering Insights advisor recommendations As an OpenShift Container Platform cluster manager, you can filter the recommendations that are displayed on the recommendations list. By applying filters, you can reduce the number of reported recommendations and concentrate on your highest priority recommendations. 
The following procedure demonstrates how to set and remove Category filters; however, the procedure is applicable to any of the filter types and respective values. Prerequisites You are logged in to the OpenShift Cluster Manager Hybrid Cloud Console . Procedure Go to Red Hat Hybrid Cloud Console OpenShift Advisor recommendations . In the main, filter-type drop-down list, select the Category filter type. Expand the filter-value drop-down list and select the checkbox to each category of recommendation you want to view. Leave the checkboxes for unnecessary categories clear. Optional: Add additional filters to further refine the list. Only recommendations from the selected categories are shown in the list. Verification After applying filters, you can view the updated recommendations list. The applied filters are added to the default filters. 4.5.5.2. Removing filters from Insights Advisor recommendations You can apply multiple filters to the list of recommendations. When ready, you can remove them individually or completely reset them. Removing filters individually Click the X icon to each filter, including the default filters, to remove them individually. Removing all non-default filters Click Reset filters to remove only the filters that you applied, leaving the default filters in place. 4.5.6. Disabling Insights Advisor recommendations You can disable specific recommendations that affect your clusters, so that they no longer appear in your reports. It is possible to disable a recommendation for a single cluster or all of your clusters. Note Disabling a recommendation for all of your clusters also applies to any future clusters. Prerequisites Remote health reporting is enabled, which is the default. Your cluster is registered on OpenShift Cluster Manager Hybrid Cloud Console . You are logged in to OpenShift Cluster Manager Hybrid Cloud Console . Procedure Navigate to Advisor Recommendations on OpenShift Cluster Manager Hybrid Cloud Console . Optional: Use the Clusters Impacted and Status filters as needed. Disable an alert by using one of the following methods: To disable an alert: Click the Options menu for that alert, and then click Disable recommendation . Enter a justification note and click Save . To view the clusters affected by this alert before disabling the alert: Click the name of the recommendation to disable. You are directed to the single recommendation page. Review the list of clusters in the Affected clusters section. Click Actions Disable recommendation to disable the alert for all of your clusters. Enter a justification note and click Save . 4.5.7. Enabling a previously disabled Insights Advisor recommendation When a recommendation is disabled for all clusters, you no longer see the recommendation in the Insights Advisor. You can change this behavior. Prerequisites Remote health reporting is enabled, which is the default. Your cluster is registered on OpenShift Cluster Manager Hybrid Cloud Console . You are logged in to OpenShift Cluster Manager Hybrid Cloud Console . Procedure Navigate to Advisor Recommendations on OpenShift Cluster Manager Hybrid Cloud Console . Filter the recommendations to display on the disabled recommendations: From the Status drop-down menu, select Status . From the Filter by status drop-down menu, select Disabled . Optional: Clear the Clusters impacted filter. Locate the recommendation to enable. Click the Options menu , and then click Enable recommendation . 4.5.8. 
Displaying the Insights status in the web console Insights repeatedly analyzes your cluster and you can display the status of identified potential issues of your cluster in the OpenShift Container Platform web console. This status shows the number of issues in the different categories and, for further details, links to the reports in OpenShift Cluster Manager Hybrid Cloud Console . Prerequisites Your cluster is registered in OpenShift Cluster Manager Hybrid Cloud Console . Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Container Platform web console. Procedure Navigate to Home Overview in the OpenShift Container Platform web console. Click Insights on the Status card. The pop-up window lists potential issues grouped by risk. Click the individual categories or View all recommendations in Insights Advisor to display more details. 4.6. Using the Insights Operator The Insights Operator periodically gathers configuration and component failure status and, by default, reports that data every two hours to Red Hat. This information enables Red Hat to assess configuration and deeper failure data than is reported through Telemetry. Users of OpenShift Container Platform can display the report in the Insights Advisor service on Red Hat Hybrid Cloud Console. Additional resources The Insights Operator is installed and enabled by default. If you need to opt out of remote health reporting, see Opting out of remote health reporting . For more information on using Insights Advisor to identify issues with your cluster, see Using Insights to identify issues with your cluster . 4.6.1. Downloading your Insights Operator archive Insights Operator stores gathered data in an archive located in the openshift-insights namespace of your cluster. You can download and review the data that is gathered by the Insights Operator. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Find the name of the running pod for the Insights Operator: USD oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running Copy the recent data archives collected by the Insights Operator: USD oc cp openshift-insights/<insights_operator_pod_name>:/var/lib/insights-operator ./insights-data 1 1 Replace <insights_operator_pod_name> with the pod name output from the preceding command. The recent Insights Operator archives are now available in the insights-data directory. 4.6.2. Viewing Insights Operator gather durations You can view the time it takes for the Insights Operator to gather the information contained in the archive. This helps you to understand Insights Operator resource usage and issues with Insights Advisor. Prerequisites A recent copy of your Insights Operator archive. Procedure From your archive, open /insights-operator/gathers.json . The file contains a list of Insights Operator gather operations: { "name": "clusterconfig/authentication", "duration_in_ms": 730, 1 "records_count": 1, "errors": null, "panic": null } 1 duration_in_ms is the amount of time in milliseconds for each gather operation. Inspect each gather operation for abnormalities. 4.7. Using remote health reporting in a restricted network You can manually gather and upload Insights Operator archives to diagnose issues from a restricted network. To use the Insights Operator in a restricted network, you must: Create a copy of your Insights Operator archive. Upload the Insights Operator archive to console.redhat.com . 
Additionally, you can choose to obfuscate the Insights Operator data before upload. 4.7.1. Running an Insights Operator gather operation You must run a gather operation to create an Insights Operator archive. Prerequisites You are logged in to OpenShift Container Platform as cluster-admin . Procedure Create a file named gather-job.yaml using this template: apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job annotations: config.openshift.io/inject-proxy: insights-operator spec: backoffLimit: 6 ttlSecondsAfterFinished: 600 template: spec: restartPolicy: OnFailure serviceAccountName: operator nodeSelector: beta.kubernetes.io/os: linux node-role.kubernetes.io/master: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 900 - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 900 volumes: - name: snapshots emptyDir: {} - name: service-ca-bundle configMap: name: service-ca-bundle optional: true initContainers: - name: insights-operator image: quay.io/openshift/origin-insights-operator:latest terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - name: snapshots mountPath: /var/lib/insights-operator - name: service-ca-bundle mountPath: /var/run/configmaps/service-ca-bundle readOnly: true ports: - containerPort: 8443 name: https resources: requests: cpu: 10m memory: 70Mi args: - gather - -v=4 - --config=/etc/insights-operator/server.yaml containers: - name: sleepy image: quay.io/openshift/origin-base:latest args: - /bin/sh - -c - sleep 10m volumeMounts: [{name: snapshots, mountPath: /var/lib/insights-operator}] Copy your insights-operator image version: USD oc get -n openshift-insights deployment insights-operator -o yaml Example output apiVersion: apps/v1 kind: Deployment metadata: name: insights-operator namespace: openshift-insights # ... spec: template: # ... spec: containers: - args: # ... image: registry.ci.openshift.org/ocp/4.15-2023-10-12-212500@sha256:a0aa581400805ad0... 1 # ... 1 Specifies your insights-operator image version. Paste your image version in gather-job.yaml : apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job # ... spec: # ... template: spec: initContainers: - name: insights-operator image: image: registry.ci.openshift.org/ocp/4.15-2023-10-12-212500@sha256:a0aa581400805ad0... 1 terminationMessagePolicy: FallbackToLogsOnError volumeMounts: 1 Replace any existing value with your insights-operator image version. Create the gather job: USD oc apply -n openshift-insights -f gather-job.yaml Find the name of the job pod: USD oc describe -n openshift-insights job/insights-operator-job Example output Name: insights-operator-job Namespace: openshift-insights # ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 7m18s job-controller Created pod: insights-operator-job-<your_job> where insights-operator-job-<your_job> is the name of the pod. Verify that the operation has finished: USD oc logs -n openshift-insights insights-operator-job-<your_job> insights-operator Example output I0407 11:55:38.192084 1 diskrecorder.go:34] Wrote 108 records to disk in 33ms Save the created archive: USD oc cp openshift-insights/insights-operator-job- <your_job> :/var/lib/insights-operator ./insights-data Clean up the job: USD oc delete -n openshift-insights job insights-operator-job 4.7.2. 
Uploading an Insights Operator archive You can manually upload an Insights Operator archive to console.redhat.com to diagnose potential issues. Prerequisites You are logged in to OpenShift Container Platform as cluster-admin . You have a workstation with unrestricted internet access. You have created a copy of the Insights Operator archive. Procedure Download the dockerconfig.json file: USD oc extract secret/pull-secret -n openshift-config --to=. Copy your "cloud.openshift.com" "auth" token from the dockerconfig.json file: { "auths": { "cloud.openshift.com": { "auth": " <your_token> ", "email": "[email protected]" } } Upload the archive to console.redhat.com : USD curl -v -H "User-Agent: insights-operator/one10time200gather184a34f6a168926d93c330 cluster/ <cluster_id> " -H "Authorization: Bearer <your_token> " -F "upload=@ <path_to_archive> ; type=application/vnd.redhat.openshift.periodic+tar" https://console.redhat.com/api/ingress/v1/upload where <cluster_id> is your cluster ID, <your_token> is the token from your pull secret, and <path_to_archive> is the path to the Insights Operator archive. If the operation is successful, the command returns a "request_id" and "account_number" : Example output * Connection #0 to host console.redhat.com left intact {"request_id":"393a7cf1093e434ea8dd4ab3eb28884c","upload":{"account_number":"6274079"}}% Verification steps Log in to https://console.redhat.com/openshift . Click the Clusters menu in the left pane. To display the details of the cluster, click the cluster name. Open the Insights Advisor tab of the cluster. If the upload was successful, the tab displays one of the following: Your cluster passed all recommendations , if Insights Advisor did not identify any issues. A list of issues that Insights Advisor has detected, prioritized by risk (low, moderate, important, and critical). 4.7.3. Enabling Insights Operator data obfuscation You can enable obfuscation to mask sensitive and identifiable IPv4 addresses and cluster base domains that the Insights Operator sends to console.redhat.com . Warning Although this feature is available, Red Hat recommends keeping obfuscation disabled for a more effective support experience. Obfuscation assigns non-identifying values to cluster IPv4 addresses, and uses a translation table that is retained in memory to change IP addresses to their obfuscated versions throughout the Insights Operator archive before uploading the data to console.redhat.com . For cluster base domains, obfuscation changes the base domain to a hardcoded substring. For example, cluster-api.openshift.example.com becomes cluster-api.<CLUSTER_BASE_DOMAIN> . The following procedure enables obfuscation using the support secret in the openshift-config namespace. Prerequisites You are logged in to the OpenShift Container Platform web console as cluster-admin . Procedure Navigate to Workloads Secrets . Select the openshift-config project. Search for the support secret using the Search by name field. If it does not exist, click Create Key/value secret to create it. Click the Options menu , and then click Edit Secret . Click Add Key/Value . Create a key named enableGlobalObfuscation with a value of true , and click Save . Navigate to Workloads Pods Select the openshift-insights project. Find the insights-operator pod. To restart the insights-operator pod, click the Options menu , and then click Delete Pod . Verification Navigate to Workloads Secrets . Select the openshift-insights project. 
Search for the obfuscation-translation-table secret using the Search by name field. If the obfuscation-translation-table secret exists, then obfuscation is enabled and working. Alternatively, you can inspect /insights-operator/gathers.json in your Insights Operator archive for the value "is_global_obfuscation_enabled": true . Additional resources For more information on how to download your Insights Operator archive, see Showing data collected by the Insights Operator . 4.8. Importing simple content access entitlements with Insights Operator Insights Operator periodically imports your simple content access entitlements from OpenShift Cluster Manager Hybrid Cloud Console and stores them in the etc-pki-entitlement secret in the openshift-config-managed namespace. Simple content access is a capability in Red Hat subscription tools which simplifies the behavior of the entitlement tooling. This feature makes it easier to consume the content provided by your Red Hat subscriptions without the complexity of configuring subscription tooling. Insights Operator imports simple content access entitlements every eight hours, but can be configured or disabled using the support secret in the openshift-config namespace. Note Simple content access must be enabled in Red Hat Subscription Management for the importing to function. Additional resources See About simple content access in the Red Hat Subscription Central documentation, for more information about simple content access. See Using Red Hat subscriptions in builds for more information about using simple content access entitlements in OpenShift Container Platform builds. 4.8.1. Configuring simple content access import interval You can configure how often the Insights Operator imports the simple content access entitlements by using the support secret in the openshift-config namespace. The entitlement import normally occurs every eight hours, but you can shorten this interval if you update your simple content access configuration in Red Hat Subscription Management. This procedure describes how to update the import interval to one hour. Prerequisites You are logged in to the OpenShift Container Platform web console as cluster-admin . Procedure Navigate to Workloads Secrets . Select the openshift-config project. Search for the support secret by using the Search by name field. If it does not exist, click Create Key/value secret to create it. If the secret exists: Click the Options menu , and then click Edit Secret . Click Add Key/Value . In the Key field, enter scaInterval . In the Value field, enter 1h . If the secret does not exist: Click Create Key/value secret . In the Secret name field, enter support . In the Key field, enter scaInterval . In the Value field, enter 1h . Click Create . Note The interval 1h can also be entered as 60m for 60 minutes. 4.8.2. Disabling simple content access import You can disable the importing of simple content access entitlements by using the support secret in the openshift-config namespace. Prerequisites You are logged in to the OpenShift Container Platform web console as cluster-admin . Procedure Navigate to Workloads Secrets . Select the openshift-config project. Search for the support secret using the Search by name field. If the secret exists: Click the Options menu , and then click Edit Secret . Click Add Key/Value . In the Key field, enter scaPullDisabled . In the Value field, enter true . If the secret does not exist: Click Create Key/value secret . In the Secret name field, enter support . 
In the Key field, enter scaPullDisabled . In the Value field, enter true . Click Create . The simple content access entitlement import is now disabled. Note To enable the simple content access import again, edit the support secret and delete the scaPullDisabled key. 4.8.3. Enabling a previously disabled simple content access import If the importing of simple content access entitlements is disabled, the Insights Operator does not import simple content access entitlements. You can change this behavior. Prerequisites You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role. Procedure Navigate to Workloads Secrets . Select the openshift-config project. Search for the support secret by using the Search by name field. Click the Options menu , and then click Edit Secret . For the scaPullDisabled key, set the Value field to false . The simple content access entitlement import is now enabled. | [
"curl -G -k -H \"Authorization: Bearer USD(oc whoami -t)\" https://USD(oc get route prometheus-k8s-federate -n openshift-monitoring -o jsonpath=\"{.spec.host}\")/federate --data-urlencode 'match[]={__name__=~\"cluster:usage:.*\"}' --data-urlencode 'match[]={__name__=\"count:up0\"}' --data-urlencode 'match[]={__name__=\"count:up1\"}' --data-urlencode 'match[]={__name__=\"cluster_version\"}' --data-urlencode 'match[]={__name__=\"cluster_version_available_updates\"}' --data-urlencode 'match[]={__name__=\"cluster_version_capability\"}' --data-urlencode 'match[]={__name__=\"cluster_operator_up\"}' --data-urlencode 'match[]={__name__=\"cluster_operator_conditions\"}' --data-urlencode 'match[]={__name__=\"cluster_version_payload\"}' --data-urlencode 'match[]={__name__=\"cluster_installer\"}' --data-urlencode 'match[]={__name__=\"cluster_infrastructure_provider\"}' --data-urlencode 'match[]={__name__=\"cluster_feature_set\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_object_counts:sum\"}' --data-urlencode 'match[]={__name__=\"ALERTS\",alertstate=\"firing\"}' --data-urlencode 'match[]={__name__=\"code:apiserver_request_total:rate:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:capacity_cpu_cores:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:capacity_memory_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"workload:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"workload:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:virt_platform_nodes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:node_instance_type_count:sum\"}' --data-urlencode 'match[]={__name__=\"cnv:vmi_status_running:count\"}' --data-urlencode 'match[]={__name__=\"cluster:vmi_request_cpu_cores:sum\"}' --data-urlencode 'match[]={__name__=\"node_role_os_version_machine:cpu_capacity_cores:sum\"}' --data-urlencode 'match[]={__name__=\"node_role_os_version_machine:cpu_capacity_sockets:sum\"}' --data-urlencode 'match[]={__name__=\"subscription_sync_total\"}' --data-urlencode 'match[]={__name__=\"olm_resolution_duration_seconds\"}' --data-urlencode 'match[]={__name__=\"csv_succeeded\"}' --data-urlencode 'match[]={__name__=\"csv_abnormal\"}' --data-urlencode 'match[]={__name__=\"cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:kubelet_volume_stats_used_bytes:provisioner:sum\"}' --data-urlencode 'match[]={__name__=\"ceph_cluster_total_bytes\"}' --data-urlencode 'match[]={__name__=\"ceph_cluster_total_used_raw_bytes\"}' --data-urlencode 'match[]={__name__=\"ceph_health_status\"}' --data-urlencode 'match[]={__name__=\"job:ceph_osd_metadata:count\"}' --data-urlencode 'match[]={__name__=\"job:kube_pv:count\"}' --data-urlencode 'match[]={__name__=\"job:ceph_pools_iops:total\"}' --data-urlencode 'match[]={__name__=\"job:ceph_pools_iops_bytes:total\"}' --data-urlencode 'match[]={__name__=\"job:ceph_versions_running:count\"}' --data-urlencode 'match[]={__name__=\"job:noobaa_total_unhealthy_buckets:sum\"}' --data-urlencode 'match[]={__name__=\"job:noobaa_bucket_count:sum\"}' --data-urlencode 'match[]={__name__=\"job:noobaa_total_object_count:sum\"}' --data-urlencode 'match[]={__name__=\"noobaa_accounts_num\"}' 
--data-urlencode 'match[]={__name__=\"noobaa_total_usage\"}' --data-urlencode 'match[]={__name__=\"console_url\"}' --data-urlencode 'match[]={__name__=\"cluster:ovnkube_master_egress_routing_via_host:max\"}' --data-urlencode 'match[]={__name__=\"cluster:network_attachment_definition_instances:max\"}' --data-urlencode 'match[]={__name__=\"cluster:network_attachment_definition_enabled_instance_up:max\"}' --data-urlencode 'match[]={__name__=\"cluster:ingress_controller_aws_nlb_active:sum\"}' --data-urlencode 'match[]={__name__=\"insightsclient_request_send_total\"}' --data-urlencode 'match[]={__name__=\"cam_app_workload_migrations\"}' --data-urlencode 'match[]={__name__=\"cluster:apiserver_current_inflight_requests:sum:max_over_time:2m\"}' --data-urlencode 'match[]={__name__=\"cluster:alertmanager_integrations:max\"}' --data-urlencode 'match[]={__name__=\"cluster:telemetry_selected_series:count\"}' --data-urlencode 'match[]={__name__=\"openshift:prometheus_tsdb_head_series:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:prometheus_tsdb_head_samples_appended_total:sum\"}' --data-urlencode 'match[]={__name__=\"monitoring:container_memory_working_set_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"namespace_job:scrape_series_added:topk3_sum1h\"}' --data-urlencode 'match[]={__name__=\"namespace_job:scrape_samples_post_metric_relabeling:topk3\"}' --data-urlencode 'match[]={__name__=\"monitoring:haproxy_server_http_responses_total:sum\"}' --data-urlencode 'match[]={__name__=\"rhmi_status\"}' --data-urlencode 'match[]={__name__=\"cluster_legacy_scheduler_policy\"}' --data-urlencode 'match[]={__name__=\"cluster_master_schedulable\"}' --data-urlencode 'match[]={__name__=\"che_workspace_status\"}' --data-urlencode 'match[]={__name__=\"che_workspace_started_total\"}' --data-urlencode 'match[]={__name__=\"che_workspace_failure_total\"}' --data-urlencode 'match[]={__name__=\"che_workspace_start_time_seconds_sum\"}' --data-urlencode 'match[]={__name__=\"che_workspace_start_time_seconds_count\"}' --data-urlencode 'match[]={__name__=\"cco_credentials_mode\"}' --data-urlencode 'match[]={__name__=\"cluster:kube_persistentvolume_plugin_type_counts:sum\"}' --data-urlencode 'match[]={__name__=\"visual_web_terminal_sessions_total\"}' --data-urlencode 'match[]={__name__=\"acm_managed_cluster_info\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_vcenter_info:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_esxi_version_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_node_hw_version_total:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:build_by_strategy:sum\"}' --data-urlencode 'match[]={__name__=\"rhods_aggregate_availability\"}' --data-urlencode 'match[]={__name__=\"rhods_total_users\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_disk_wal_fsync_duration_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_mvcc_db_total_size_in_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_network_peer_round_trip_time_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_mvcc_db_total_size_in_use_in_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_disk_backend_commit_duration_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_storage_types\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_strategies\"}' --data-urlencode 
'match[]={__name__=\"jaeger_operator_instances_agent_strategies\"}' --data-urlencode 'match[]={__name__=\"appsvcs:cores_by_product:sum\"}' --data-urlencode 'match[]={__name__=\"nto_custom_profiles:count\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_configmap\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_secret\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_mount_failures_total\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_mount_requests_total\"}' --data-urlencode 'match[]={__name__=\"cluster:velero_backup_total:max\"}' --data-urlencode 'match[]={__name__=\"cluster:velero_restore_total:max\"}' --data-urlencode 'match[]={__name__=\"eo_es_storage_info\"}' --data-urlencode 'match[]={__name__=\"eo_es_redundancy_policy_info\"}' --data-urlencode 'match[]={__name__=\"eo_es_defined_delete_namespaces_total\"}' --data-urlencode 'match[]={__name__=\"eo_es_misconfigured_memory_resources_info\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_data_nodes_total:max\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_documents_created_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_documents_deleted_total:sum\"}' --data-urlencode 'match[]={__name__=\"pod:eo_es_shards_total:max\"}' --data-urlencode 'match[]={__name__=\"eo_es_cluster_management_state_info\"}' --data-urlencode 'match[]={__name__=\"imageregistry:imagestreamtags_count:sum\"}' --data-urlencode 'match[]={__name__=\"imageregistry:operations_count:sum\"}' --data-urlencode 'match[]={__name__=\"log_logging_info\"}' --data-urlencode 'match[]={__name__=\"log_collector_error_count_total\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_pipeline_info\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_input_info\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_output_info\"}' --data-urlencode 'match[]={__name__=\"cluster:log_collected_bytes_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:log_logged_bytes_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:kata_monitor_running_shim_count:sum\"}'",
"INSIGHTS_OPERATOR_POD=USD(oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running)",
"oc cp openshift-insights/USDINSIGHTS_OPERATOR_POD:/var/lib/insights-operator ./insights-data",
"oc extract secret/pull-secret -n openshift-config --to=.",
"\"cloud.openshift.com\":{\"auth\":\"<hash>\",\"email\":\"<email_address>\"}",
"oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1",
"oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1",
"{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \" <your_token> \", \"email\": \" <email_address> \" } } }",
"oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' > pull-secret",
"cp pull-secret pull-secret-backup",
"set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=pull-secret",
"oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running",
"oc cp openshift-insights/<insights_operator_pod_name>:/var/lib/insights-operator ./insights-data 1",
"{ \"name\": \"clusterconfig/authentication\", \"duration_in_ms\": 730, 1 \"records_count\": 1, \"errors\": null, \"panic\": null }",
"apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job annotations: config.openshift.io/inject-proxy: insights-operator spec: backoffLimit: 6 ttlSecondsAfterFinished: 600 template: spec: restartPolicy: OnFailure serviceAccountName: operator nodeSelector: beta.kubernetes.io/os: linux node-role.kubernetes.io/master: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 900 - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 900 volumes: - name: snapshots emptyDir: {} - name: service-ca-bundle configMap: name: service-ca-bundle optional: true initContainers: - name: insights-operator image: quay.io/openshift/origin-insights-operator:latest terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - name: snapshots mountPath: /var/lib/insights-operator - name: service-ca-bundle mountPath: /var/run/configmaps/service-ca-bundle readOnly: true ports: - containerPort: 8443 name: https resources: requests: cpu: 10m memory: 70Mi args: - gather - -v=4 - --config=/etc/insights-operator/server.yaml containers: - name: sleepy image: quay.io/openshift/origin-base:latest args: - /bin/sh - -c - sleep 10m volumeMounts: [{name: snapshots, mountPath: /var/lib/insights-operator}]",
"oc get -n openshift-insights deployment insights-operator -o yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: insights-operator namespace: openshift-insights spec: template: spec: containers: - args: image: registry.ci.openshift.org/ocp/4.15-2023-10-12-212500@sha256:a0aa581400805ad0... 1",
"apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job spec: template: spec: initContainers: - name: insights-operator image: image: registry.ci.openshift.org/ocp/4.15-2023-10-12-212500@sha256:a0aa581400805ad0... 1 terminationMessagePolicy: FallbackToLogsOnError volumeMounts:",
"oc apply -n openshift-insights -f gather-job.yaml",
"oc describe -n openshift-insights job/insights-operator-job",
"Name: insights-operator-job Namespace: openshift-insights Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 7m18s job-controller Created pod: insights-operator-job-<your_job>",
"oc logs -n openshift-insights insights-operator-job-<your_job> insights-operator",
"I0407 11:55:38.192084 1 diskrecorder.go:34] Wrote 108 records to disk in 33ms",
"oc cp openshift-insights/insights-operator-job- <your_job> :/var/lib/insights-operator ./insights-data",
"oc delete -n openshift-insights job insights-operator-job",
"oc extract secret/pull-secret -n openshift-config --to=.",
"{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \" <your_token> \", \"email\": \"[email protected]\" } }",
"curl -v -H \"User-Agent: insights-operator/one10time200gather184a34f6a168926d93c330 cluster/ <cluster_id> \" -H \"Authorization: Bearer <your_token> \" -F \"upload=@ <path_to_archive> ; type=application/vnd.redhat.openshift.periodic+tar\" https://console.redhat.com/api/ingress/v1/upload",
"* Connection #0 to host console.redhat.com left intact {\"request_id\":\"393a7cf1093e434ea8dd4ab3eb28884c\",\"upload\":{\"account_number\":\"6274079\"}}%"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/support/remote-health-monitoring-with-connected-clusters |
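The archive and upload commands above can be chained into a single helper; the following is a minimal sketch only, in which the ClusterVersion-based cluster ID lookup, the *.tar.gz archive pattern, and the token placeholder are assumptions rather than part of the documented procedure.

#!/bin/bash
# Sketch: copy the newest Insights Operator archive and upload it manually.
# The clusterID lookup and archive file pattern are assumptions; replace
# <your_token> with the cloud.openshift.com token from your pull secret.
set -euo pipefail
POD=$(oc get pods --namespace=openshift-insights \
  -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running)
oc cp "openshift-insights/${POD}:/var/lib/insights-operator" ./insights-data
CLUSTER_ID=$(oc get clusterversion version -o jsonpath='{.spec.clusterID}')
ARCHIVE=$(ls -t ./insights-data/*.tar.gz | head -n 1)
curl -v \
  -H "User-Agent: insights-operator/manual-upload cluster/${CLUSTER_ID}" \
  -H "Authorization: Bearer <your_token>" \
  -F "upload=@${ARCHIVE}; type=application/vnd.redhat.openshift.periodic+tar" \
  https://console.redhat.com/api/ingress/v1/upload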
8.12. Guest Virtual Machine Power Management | 8.12. Guest Virtual Machine Power Management It is possible to forcibly enable or disable BIOS advertisements to the guest virtual machine's operating system by changing the following parameters in the Domain XML for Libvirt: The element pm enables ('yes') or disables ('no') BIOS support for S3 (suspend-to-mem) and S4 (suspend-to-disk) ACPI sleep states. If nothing is specified, then the hypervisor will be left with its default value. | [
"<pm> <suspend-to-disk enabled='no'/> <suspend-to-mem enabled='yes'/> </pm>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-s3-s4 |
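One hedged way to apply the pm element shown above is through virsh; this is a sketch only, and the domain name rhel6-guest is a placeholder, not taken from the original text.

# Sketch: add the <pm> element to an existing guest definition with virsh.
# "rhel6-guest" is an illustrative domain name.
virsh edit rhel6-guest
# Inside the editor, within <domain> ... </domain>, add:
#   <pm>
#     <suspend-to-mem enabled='yes'/>
#     <suspend-to-disk enabled='no'/>
#   </pm>
# After saving, confirm the element is present:
virsh dumpxml rhel6-guest | grep -A 3 '<pm>'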
Chapter 10. Distributed tracing | Chapter 10. Distributed tracing Distributed tracing allows you to track the progress of transactions between applications in a distributed system. In a microservices architecture, tracing tracks the progress of transactions between services. Trace data is useful for monitoring application performance and investigating issues with target systems and end-user applications. In AMQ Streams, tracing facilitates the end-to-end tracking of messages: from source systems to Kafka, and then from Kafka to target systems and applications. It complements the metrics that are available to view in Grafana dashboards , as well as the component loggers. How AMQ Streams supports tracing Support for tracing is built in to the following components: Kafka Connect (including Kafka Connect with Source2Image support) MirrorMaker MirrorMaker 2.0 AMQ Streams Kafka Bridge You enable and configure tracing for these components using template configuration properties in their custom resources. To enable tracing in Kafka producers, consumers, and Kafka Streams API applications, you instrument application code using the OpenTracing Apache Kafka Client Instrumentation library (included with AMQ Streams). When instrumented, clients generate trace data; for example, when producing messages or writing offsets to the log. Traces are sampled according to a sampling strategy and then visualized in the Jaeger user interface. Note Tracing is not supported for Kafka brokers. Setting up tracing for applications and systems beyond AMQ Streams is outside the scope of this chapter. To learn more about this subject, search for "inject and extract" in the OpenTracing documentation . Outline of procedures To set up tracing for AMQ Streams, follow these procedures in order: Set up tracing for clients: Initialize a Jaeger tracer for Kafka clients Instrument clients with tracers: Instrument producers and consumers for tracing Instrument Kafka Streams applications for tracing Set up tracing for MirrorMaker, Kafka Connect, and the Kafka Bridge Prerequisites The Jaeger backend components are deployed to your OpenShift cluster. For deployment instructions, see the Jaeger deployment documentation . 10.1. Overview of OpenTracing and Jaeger AMQ Streams uses the OpenTracing and Jaeger projects. OpenTracing is an API specification that is independent from the tracing or monitoring system. The OpenTracing APIs are used to instrument application code Instrumented applications generate traces for individual transactions across the distributed system Traces are composed of spans that define specific units of work over time Jaeger is a tracing system for microservices-based distributed systems. Jaeger implements the OpenTracing APIs and provides client libraries for instrumentation The Jaeger user interface allows you to query, filter, and analyze trace data Additional resources OpenTracing Jaeger 10.2. Setting up tracing for Kafka clients Initialize a Jaeger tracer to instrument your client applications for distributed tracing. 10.2.1. Initializing a Jaeger tracer for Kafka clients Configure and initialize a Jaeger tracer using a set of tracing environment variables . Procedure In each client application: Add Maven dependencies for Jaeger to the pom.xml file for the client application: <dependency> <groupId>io.jaegertracing</groupId> <artifactId>jaeger-client</artifactId> <version>1.1.0.redhat-00002</version> </dependency> Define the configuration of the Jaeger tracer using the tracing environment variables . 
Create the Jaeger tracer from the environment variables that you defined in step two: Tracer tracer = Configuration.fromEnv().getTracer(); Note For alternative ways to initialize a Jaeger tracer, see the Java OpenTracing library documentation. Register the Jaeger tracer as a global tracer: GlobalTracer.register(tracer); A Jaeger tracer is now initialized for the client application to use. 10.2.2. Environment variables for tracing Use these environment variables when configuring a Jaeger tracer for Kafka clients. Note The tracing environment variables are part of the Jaeger project and are subject to change. For the latest environment variables, see the Jaeger documentation . Property Required Description JAEGER_SERVICE_NAME Yes The name of the Jaeger tracer service. JAEGER_AGENT_HOST No The hostname for communicating with the jaeger-agent through the User Datagram Protocol (UDP). JAEGER_AGENT_PORT No The port used for communicating with the jaeger-agent through UDP. JAEGER_ENDPOINT No The traces endpoint. Only define this variable if the client application will bypass the jaeger-agent and connect directly to the jaeger-collector . JAEGER_AUTH_TOKEN No The authentication token to send to the endpoint as a bearer token. JAEGER_USER No The username to send to the endpoint if using basic authentication. JAEGER_PASSWORD No The password to send to the endpoint if using basic authentication. JAEGER_PROPAGATION No A comma-separated list of formats to use for propagating the trace context. Defaults to the standard Jaeger format. Valid values are jaeger , b3 , and w3c . JAEGER_REPORTER_LOG_SPANS No Indicates whether the reporter should also log the spans. JAEGER_REPORTER_MAX_QUEUE_SIZE No The reporter's maximum queue size. JAEGER_REPORTER_FLUSH_INTERVAL No The reporter's flush interval, in ms. Defines how frequently the Jaeger reporter flushes span batches. JAEGER_SAMPLER_TYPE No The sampling strategy to use for client traces: Constant Probabilistic Rate Limiting Remote (the default) To sample all traces, use the Constant sampling strategy with a parameter of 1. For more information, see the Jaeger documentation . JAEGER_SAMPLER_PARAM No The sampler parameter (number). JAEGER_SAMPLER_MANAGER_HOST_PORT No The hostname and port to use if a Remote sampling strategy is selected. JAEGER_TAGS No A comma-separated list of tracer-level tags that are added to all reported spans. The value can also refer to an environment variable using the format USD{envVarName:default} . :default is optional and identifies a value to use if the environment variable cannot be found. Additional resources Section 10.2.1, "Initializing a Jaeger tracer for Kafka clients" 10.3. Instrumenting Kafka clients with tracers Instrument Kafka producer and consumer clients, and Kafka Streams API applications for distributed tracing. 10.3.1. Instrumenting producers and consumers for tracing Use a Decorator pattern or Interceptors to instrument your Java producer and consumer application code for tracing. Procedure In the application code of each producer and consumer application: Add the Maven dependency for OpenTracing to the producer or consumer's pom.xml file. <dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-client</artifactId> <version>0.1.12.redhat-00001</version> </dependency> Instrument your client application code using either a Decorator pattern or Interceptors. 
To use a Decorator pattern: // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Create an instance of the TracingKafkaProducer: TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer); // Send: tracingProducer.send(...); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Create an instance of the TracingKafkaConsumer: TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer); // Subscribe: tracingConsumer.subscribe(Collections.singletonList("messages")); // Get messages: ConsumerRecords<Integer, String> records = tracingConsumer.poll(1000); // Retrieve SpanContext from polled record (consumer side): ConsumerRecord<Integer, String> record = ... SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer); To use Interceptors: // Register the tracer with GlobalTracer: GlobalTracer.register(tracer); // Add the TracingProducerInterceptor to the sender properties: senderProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Send: producer.send(...); // Add the TracingConsumerInterceptor to the consumer properties: consumerProps.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Subscribe: consumer.subscribe(Collections.singletonList("messages")); // Get messages: ConsumerRecords<Integer, String> records = consumer.poll(1000); // Retrieve the SpanContext from a polled message (consumer side): ConsumerRecord<Integer, String> record = ... SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer); 10.3.1.1. Custom span names in a Decorator pattern A span is a logical unit of work in Jaeger, with an operation name, start time, and duration. To use a Decorator pattern to instrument your producer and consumer applications, define custom span names by passing a BiFunction object as an additional argument when creating the TracingKafkaProducer and TracingKafkaConsumer objects. The OpenTracing Apache Kafka Client Instrumentation library includes several built-in span names. Example: Using custom span names to instrument client application code in a Decorator pattern // Create a BiFunction for the KafkaProducer that operates on (String operationName, ProducerRecord consumerRecord) and returns a String to be used as the name: BiFunction<String, ProducerRecord, String> producerSpanNameProvider = (operationName, producerRecord) -> "CUSTOM_PRODUCER_NAME"; // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Create an instance of the TracingKafkaProducer TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer, producerSpanNameProvider); // Spans created by the tracingProducer will now have "CUSTOM_PRODUCER_NAME" as the span name. 
// Create a BiFunction for the KafkaConsumer that operates on (String operationName, ConsumerRecord consumerRecord) and returns a String to be used as the name: BiFunction<String, ConsumerRecord, String> consumerSpanNameProvider = (operationName, consumerRecord) -> operationName.toUpperCase(); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Create an instance of the TracingKafkaConsumer, passing in the consumerSpanNameProvider BiFunction: TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer, consumerSpanNameProvider); // Spans created by the tracingConsumer will have the operation name as the span name, in upper-case. // "receive" -> "RECEIVE" 10.3.1.2. Built-in span names When defining custom span names, you can use the following BiFunctions in the ClientSpanNameProvider class. If no spanNameProvider is specified, CONSUMER_OPERATION_NAME and PRODUCER_OPERATION_NAME are used. BiFunction Description CONSUMER_OPERATION_NAME, PRODUCER_OPERATION_NAME Returns the operationName as the span name: "receive" for consumers and "send" for producers. CONSUMER_PREFIXED_OPERATION_NAME(String prefix), PRODUCER_PREFIXED_OPERATION_NAME(String prefix) Returns a String concatenation of prefix and operationName . CONSUMER_TOPIC, PRODUCER_TOPIC Returns the name of the topic that the message was sent to or retrieved from in the format (record.topic()) . PREFIXED_CONSUMER_TOPIC(String prefix), PREFIXED_PRODUCER_TOPIC(String prefix) Returns a String concatenation of prefix and the topic name in the format (record.topic()) . CONSUMER_OPERATION_NAME_TOPIC, PRODUCER_OPERATION_NAME_TOPIC Returns the operation name and the topic name: "operationName - record.topic()" . CONSUMER_PREFIXED_OPERATION_NAME_TOPIC(String prefix), PRODUCER_PREFIXED_OPERATION_NAME_TOPIC(String prefix) Returns a String concatenation of prefix and "operationName - record.topic()" . 10.3.2. Instrumenting Kafka Streams applications for tracing This section describes how to instrument Kafka Streams API applications for distributed tracing. Procedure In each Kafka Streams API application: Add the opentracing-kafka-streams dependency to the pom.xml file for your Kafka Streams API application: <dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-streams</artifactId> <version>0.1.12.redhat-00001</version> </dependency> Create an instance of the TracingKafkaClientSupplier supplier interface: KafkaClientSupplier supplier = new TracingKafkaClientSupplier(tracer); Provide the supplier interface to KafkaStreams : KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config), supplier); streams.start(); 10.4. Setting up tracing for MirrorMaker, Kafka Connect, and the Kafka Bridge Distributed tracing is supported for MirrorMaker, MirrorMaker 2.0, Kafka Connect (including Kafka Connect with Source2Image support), and the AMQ Streams Kafka Bridge. Tracing in MirrorMaker and MirrorMaker 2.0 For MirrorMaker and MirrorMaker 2.0, messages are traced from the source cluster to the target cluster. The trace data records messages entering and leaving the MirrorMaker or MirrorMaker 2.0 component. Tracing in Kafka Connect Only messages produced and consumed by Kafka Connect itself are traced. To trace messages sent between Kafka Connect and external systems, you must configure tracing in the connectors for those systems. For more information, see Section 2.2.1, "Configuring Kafka Connect" . 
Tracing in the Kafka Bridge Messages produced and consumed by the Kafka Bridge are traced. Incoming HTTP requests from client applications to send and receive messages through the Kafka Bridge are also traced. To have end-to-end tracing, you must configure tracing in your HTTP clients. 10.4.1. Enabling tracing in MirrorMaker, Kafka Connect, and Kafka Bridge resources Update the configuration of KafkaMirrorMaker , KafkaMirrorMaker2 , KafkaConnect , KafkaConnectS2I , and KafkaBridge custom resources to specify and configure a Jaeger tracer service for each resource. Updating a tracing-enabled resource in your OpenShift cluster triggers two events: Interceptor classes are updated in the integrated consumers and producers in MirrorMaker, MirrorMaker 2.0, Kafka Connect, or the AMQ Streams Kafka Bridge. For MirrorMaker, MirrorMaker 2.0, and Kafka Connect, the tracing agent initializes a Jaeger tracer based on the tracing configuration defined in the resource. For the Kafka Bridge, a Jaeger tracer based on the tracing configuration defined in the resource is initialized by the Kafka Bridge itself. Procedure Perform these steps for each KafkaMirrorMaker , KafkaMirrorMaker2 , KafkaConnect , KafkaConnectS2I , and KafkaBridge resource. In the spec.template property, configure the Jaeger tracer service. For example: Jaeger tracer configuration for Kafka Connect apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-connect-cluster spec: #... template: connectContainer: 1 env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: "6831" tracing: 2 type: jaeger #... Jaeger tracer configuration for MirrorMaker apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaMirrorMaker metadata: name: my-mirror-maker spec: #... template: mirrorMakerContainer: env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: "6831" tracing: type: jaeger #... Jaeger tracer configuration for MirrorMaker 2.0 apiVersion: kafka.strimzi.io/v1alpha1 kind: KafkaMirrorMaker2 metadata: name: my-mm2-cluster spec: #... template: connectContainer: env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: "6831" tracing: type: jaeger #... Jaeger tracer configuration for the Kafka Bridge apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaBridge metadata: name: my-bridge spec: #... template: bridgeContainer: env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: "6831" tracing: type: jaeger #... 1 Use the tracing environment variables as template configuration properties. 2 Set the spec.tracing.type property to jaeger . Create or update the resource: oc apply -f your-file Additional resources Section B.60, " ContainerTemplate schema reference" Section 2.6, "Customizing OpenShift resources" | [
"<dependency> <groupId>io.jaegertracing</groupId> <artifactId>jaeger-client</artifactId> <version>1.1.0.redhat-00002</version> </dependency>",
"Tracer tracer = Configuration.fromEnv().getTracer();",
"GlobalTracer.register(tracer);",
"<dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-client</artifactId> <version>0.1.12.redhat-00001</version> </dependency>",
"// Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Create an instance of the TracingKafkaProducer: TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer); // Send: tracingProducer.send(...); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Create an instance of the TracingKafkaConsumer: TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer); // Subscribe: tracingConsumer.subscribe(Collections.singletonList(\"messages\")); // Get messages: ConsumerRecords<Integer, String> records = tracingConsumer.poll(1000); // Retrieve SpanContext from polled record (consumer side): ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);",
"// Register the tracer with GlobalTracer: GlobalTracer.register(tracer); // Add the TracingProducerInterceptor to the sender properties: senderProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Send: producer.send(...); // Add the TracingConsumerInterceptor to the consumer properties: consumerProps.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Subscribe: consumer.subscribe(Collections.singletonList(\"messages\")); // Get messages: ConsumerRecords<Integer, String> records = consumer.poll(1000); // Retrieve the SpanContext from a polled message (consumer side): ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);",
"// Create a BiFunction for the KafkaProducer that operates on (String operationName, ProducerRecord consumerRecord) and returns a String to be used as the name: BiFunction<String, ProducerRecord, String> producerSpanNameProvider = (operationName, producerRecord) -> \"CUSTOM_PRODUCER_NAME\"; // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Create an instance of the TracingKafkaProducer TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer, producerSpanNameProvider); // Spans created by the tracingProducer will now have \"CUSTOM_PRODUCER_NAME\" as the span name. // Create a BiFunction for the KafkaConsumer that operates on (String operationName, ConsumerRecord consumerRecord) and returns a String to be used as the name: BiFunction<String, ConsumerRecord, String> consumerSpanNameProvider = (operationName, consumerRecord) -> operationName.toUpperCase(); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Create an instance of the TracingKafkaConsumer, passing in the consumerSpanNameProvider BiFunction: TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer, consumerSpanNameProvider); // Spans created by the tracingConsumer will have the operation name as the span name, in upper-case. // \"receive\" -> \"RECEIVE\"",
"<dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-streams</artifactId> <version>0.1.12.redhat-00001</version> </dependency>",
"KafkaClientSupplier supplier = new TracingKafkaClientSupplier(tracer);",
"KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config), supplier); streams.start();",
"apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-connect-cluster spec: # template: connectContainer: 1 env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: \"6831\" tracing: 2 type: jaeger #",
"apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaMirrorMaker metadata: name: my-mirror-maker spec: # template: mirrorMakerContainer: env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: \"6831\" tracing: type: jaeger #",
"apiVersion: kafka.strimzi.io/v1alpha1 kind: KafkaMirrorMaker2 metadata: name: my-mm2-cluster spec: # template: connectContainer: env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: \"6831\" tracing: type: jaeger #",
"apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaBridge metadata: name: my-bridge spec: # template: bridgeContainer: env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: \"6831\" tracing: type: jaeger #",
"apply -f your-file"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_amq_streams_on_openshift/assembly-distributed-tracing-str |
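Because the tracer in the chapter above is configured entirely through environment variables, a Kafka client can be wired up from the shell before it starts. The following is a minimal sketch; the service name, agent host, and application JAR are assumptions.

# Sketch: export the Jaeger tracing environment variables described above
# and start an instrumented Kafka client. All values are illustrative.
export JAEGER_SERVICE_NAME=my-kafka-producer
export JAEGER_AGENT_HOST=my-jaeger-agent
export JAEGER_AGENT_PORT=6831
export JAEGER_SAMPLER_TYPE=const    # constant sampling strategy
export JAEGER_SAMPLER_PARAM=1       # sample all traces
export JAEGER_REPORTER_LOG_SPANS=true
java -jar my-instrumented-producer.jar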
Chapter 35. Deployment and Tools | Chapter 35. Deployment and Tools systemd component, BZ# 1178848 The systemd service cannot set cgroup properties on cgroup trees that are mounted as read-only. Consequently, the following error message can occasionally appear in the logs: You can ignore this problem, as it should not have any significant effect on your system. systemd component, BZ# 978955 When attempting to start, stop, or restart a service or unit using the systemctl [start|stop|restart] NAME command, no message is displayed to inform the user whether the action has been successful. subscription-manager component, BZ# 1166333 The Assamese (as-IN), Punjabi (pa-IN), and Korean (ko-KR) translations of subscription-manager's user interface are incomplete. As a consequence, users of subscription-manager running in one of these locales may see labels in English rather than the configured language. systemtap component, BZ#1184374 Certain functions in the kernel are not probed as expected. To work around this issue, try to probe by a statement or by a related function. systemtap component, BZ#1183038 Certain parameters or functions cannot be accessed within function probes. As a consequence, the $parameter accesses can be rejected. To work around this issue, activate the systemtap prologue-searching heuristics. | [
"Failed to reset devices.list on /machine.slice: Invalid argument"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.1_release_notes/known-issues-deployment_and_tools |
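Because systemctl start, stop, and restart print no confirmation in the release described above, the outcome can be checked explicitly afterwards; this is a small sketch, and httpd is only an example unit name.

# Sketch: verify the result of a silent systemctl action.
# "httpd" is an illustrative unit name.
systemctl restart httpd
systemctl is-active httpd           # prints "active" on success
systemctl status httpd --no-pager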
Chapter 4. Distributed tracing platform (Tempo) | Chapter 4. Distributed tracing platform (Tempo) 4.1. Installing the distributed tracing platform (Tempo) Installing the distributed tracing platform (Tempo) involves the following steps: Setting up supported object storage. Installing the Tempo Operator. Creating a secret for the object storage credentials. Creating a namespace for a TempoStack instance. Creating a TempoStack custom resource to deploy at least one TempoStack instance. 4.1.1. Object storage setup You can use the following configuration parameters when setting up a supported object storage. Table 4.1. Required secret parameters Storage provider Secret parameters Red Hat OpenShift Data Foundation name: tempostack-dev-odf # example bucket: <bucket_name> # requires an ObjectBucketClaim endpoint: https://s3.openshift-storage.svc access_key_id: <data_foundation_access_key_id> access_key_secret: <data_foundation_access_key_secret> MinIO See MinIO Operator . name: tempostack-dev-minio # example bucket: <minio_bucket_name> # MinIO documentation endpoint: <minio_bucket_endpoint> access_key_id: <minio_access_key_id> access_key_secret: <minio_access_key_secret> Amazon S3 name: tempostack-dev-s3 # example bucket: <s3_bucket_name> # Amazon S3 documentation endpoint: <s3_bucket_endpoint> access_key_id: <s3_access_key_id> access_key_secret: <s3_access_key_secret> Microsoft Azure Blob Storage name: tempostack-dev-azure # example container: <azure_blob_storage_container_name> # Microsoft Azure documentation account_name: <azure_blob_storage_account_name> account_key: <azure_blob_storage_account_key> Google Cloud Storage on Google Cloud Platform (GCP) name: tempostack-dev-gcs # example bucketname: <google_cloud_storage_bucket_name> # requires a bucket created in a GCP project key.json: <path/to/key.json> # requires a service account in the bucket's GCP project for GCP authentication 4.1.2. Installing the distributed tracing platform (Tempo) from the web console You can install the distributed tracing platform (Tempo) from the Administrator view of the web console. Prerequisites You are logged in to the OpenShift Container Platform web console as a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role. You are using a supported provider of object storage: Red Hat OpenShift Data Foundation , MinIO , Amazon S3 , Azure Blob Storage , Google Cloud Storage . Procedure Install the Tempo Operator: Go to Operators OperatorHub and search for Tempo Operator . Select the Tempo Operator that is OpenShift Operator for Tempo Install Install View Operator . Important This installs the Operator with the default presets: Update channel stable Installation mode All namespaces on the cluster Installed Namespace openshift-tempo-operator Update approval Automatic In the Details tab of the page of the installed Operator, under ClusterServiceVersion details , verify that the installation Status is Succeeded . Create a project of your choice for the TempoStack instance that you will create in a subsequent step: go to Home Projects Create Project . In the project that you created for the TempoStack instance, create a secret for your object storage bucket: go to Workloads Secrets Create From YAML . 
Example secret for Amazon S3 and MinIO storage apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque Create a TempoStack instance. Note You can create multiple TempoStack instances in separate projects on the same cluster. Go to Operators Installed Operators . Select TempoStack Create TempoStack YAML view . In the YAML view , customize the TempoStack custom resource (CR): apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: sample namespace: <project_of_tempostack_instance> spec: storageSize: 1Gi storage: secret: name: <secret-name> 1 type: <secret-provider> 2 template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route 1 The value of the name in the metadata of the secret. 2 The accepted values are azure for Azure Blob Storage; gcs for Google Cloud Storage; and s3 for Amazon S3, MinIO, or Red Hat OpenShift Data Foundation. Example of a TempoStack CR for AWS S3 and MinIO storage apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: <project_of_tempostack_instance> spec: storageSize: 1Gi storage: secret: name: minio-test type: s3 resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route The stack deployed in this example is configured to receive Jaeger Thrift over HTTP and OpenTelemetry Protocol (OTLP), which permits visualizing the data with the Jaeger UI. Select Create . Verification Use the Project: dropdown list to select the project of the TempoStack instance. Go to Operators Installed Operators to verify that the Status of the TempoStack instance is Condition: Ready . Go to Workloads Pods to verify that all the component pods of the TempoStack instance are running. Access the Tempo console: Go to Networking Routes and Ctrl + F to search for tempo . In the Location column, open the URL to access the Tempo console. Select Log In With OpenShift to use your cluster administrator credentials for the web console. Note The Tempo console initially shows no trace data following the Tempo console installation. 4.1.3. Installing the distributed tracing platform (Tempo) by using the CLI You can install the distributed tracing platform (Tempo) from the command line. Prerequisites An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. Tip Ensure that your OpenShift CLI ( oc ) version is up to date and matches your OpenShift Container Platform version. Run oc login : USD oc login --username=<your_username> You are using a supported provider of object storage: Red Hat OpenShift Data Foundation , MinIO , Amazon S3 , Azure Blob Storage , Google Cloud Storage . 
Procedure Install the Tempo Operator: Create a project for the Tempo Operator by running the following command: USD oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: labels: kubernetes.io/metadata.name: openshift-tempo-operator openshift.io/cluster-monitoring: "true" name: openshift-tempo-operator EOF Create an Operator group by running the following command: USD oc apply -f - << EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-tempo-operator namespace: openshift-tempo-operator spec: upgradeStrategy: Default EOF Create a subscription by running the following command: USD oc apply -f - << EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: tempo-product namespace: openshift-tempo-operator spec: channel: stable installPlanApproval: Automatic name: tempo-product source: redhat-operators sourceNamespace: openshift-marketplace EOF Check the operator status by running the following command: USD oc get csv -n openshift-tempo-operator Run the following command to create a project of your choice for the TempoStack instance that you will create in a subsequent step: USD oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_tempostack_instance> EOF In the project that you created for the TempoStack instance, create a secret for your object storage bucket by running the following command: USD oc apply -f - << EOF <object_storage_secret> EOF Example secret for Amazon S3 and MinIO storage apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque Create a TempoStack instance in the project that you created for the TempoStack instance. Note You can create multiple TempoStack instances in separate projects on the same cluster. Customize the TempoStack custom resource (CR): apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: sample namespace: <project_of_tempostack_instance> spec: storageSize: 1Gi storage: secret: name: <secret-name> 1 type: <secret-provider> 2 template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route 1 The value of the name in the metadata of the secret. 2 The accepted values are azure for Azure Blob Storage; gcs for Google Cloud Storage; and s3 for Amazon S3, MinIO, or Red Hat OpenShift Data Foundation. Example of a TempoStack CR for AWS S3 and MinIO storage apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: project_of_tempostack_instance spec: storageSize: 1Gi storage: secret: name: minio-test type: s3 resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route The stack deployed in this example is configured to receive Jaeger Thrift over HTTP and OpenTelemetry Protocol (OTLP), which permits visualizing the data with the Jaeger UI. Apply the customized CR by running the following command. 
USD oc apply -f - << EOF <TempoStack_custom_resource> EOF Verification Verify that the status of all TempoStack components is Running and the conditions are type: Ready by running the following command: USD oc get tempostacks.tempo.grafana.com simplest -o yaml Verify that all the TempoStack component pods are running by running the following command: USD oc get pods Access the Tempo console: Query the route details by running the following command: USD export TEMPO_URL=USD(oc get route -n <control_plane_namespace> tempo -o jsonpath='{.spec.host}') Open https://<route_from_previous_step> in a web browser. Log in using your cluster administrator credentials for the web console. Note The Tempo console initially shows no trace data following the Tempo console installation. 4.1.4. Additional resources Creating a cluster admin OperatorHub.io Accessing the web console Installing from OperatorHub using the web console Creating applications from installed Operators Getting started with the OpenShift CLI 4.2. Configuring and deploying the distributed tracing platform (Tempo) The Tempo Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the distributed tracing platform (Tempo) resources. You can install the default configuration or modify the file. 4.2.1. Customizing your deployment For information about configuring the back-end storage, see Understanding persistent storage and the appropriate configuration topic for your chosen storage option. 4.2.1.1. Distributed tracing default configuration options The Tempo custom resource (CR) defines the architecture and settings to be used when creating the distributed tracing platform (Tempo) resources. You can modify these parameters to customize your distributed tracing platform (Tempo) implementation to your business needs. Example of a generic Tempo YAML file apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: name spec: storage: {} resources: {} storageSize: 200M replicationFactor: 1 retention: {} template: distributor:{} ingester: {} compactor: {} querier: {} queryFrontend: {} gateway: {} Table 4.2. Tempo parameters Parameter Description Values Default value API version to use when creating the object. tempo.grafana.com/v1alpha1 tempo.grafana.com/v1alpha1 Defines the kind of Kubernetes object to create. tempo Data that uniquely identifies the object, including a name string, UID , and optional namespace . OpenShift Container Platform automatically generates the UID and completes the namespace with the name of the project where the object is created. Name for the object. Name of your TempoStack instance. tempo-all-in-one-inmemory Specification for the object to be created. Contains all of the configuration parameters for your TempoStack instance. When a common definition for all Tempo components is required, it is defined under the spec node. When the definition relates to an individual component, it is placed under the spec/template/<component> node. N/A Resources assigned to the TempoStack. Storage size for ingester PVCs. Configuration for the replication factor. Configuration options for retention of traces. Configuration options that define the storage. All storage-related options must be placed under storage and not under the allInOne or other component options. Configuration options for the Tempo distributor . Configuration options for the Tempo ingester . Configuration options for the Tempo compactor . 
Configuration options for the Tempo querier . Configuration options for the Tempo query-frontend . Configuration options for the Tempo gateway . Minimum required configuration The following is the required minimum for creating a distributed tracing platform (Tempo) deployment with the default settings: apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest spec: storage: 1 secret: name: minio type: s3 resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: enabled: true ingress: type: route 1 This section specifies the deployed object storage back end, which requires a created secret with credentials for access to the object storage. 4.2.1.2. The distributed tracing platform (Tempo) storage configuration You can configure object storage for the distributed tracing platform (Tempo) in the TempoStack custom resource under spec.storage . You can choose from among several storage providers that are supported. Table 4.3. General storage parameters used by the Tempo Operator to define distributed tracing storage Parameter Description Values Default value Type of storage to use for the deployment. memory . Memory storage is only appropriate for development, testing, demonstrations, and proof of concept environments because the data does not persist when the pod is shut down. memory Name of the secret that contains the credentials for the set object storage type. N/A CA is the name of a ConfigMap object containing a CA certificate. Table 4.4. Required secret parameters Storage provider Secret parameters Red Hat OpenShift Data Foundation name: tempostack-dev-odf # example bucket: <bucket_name> # requires an ObjectBucketClaim endpoint: https://s3.openshift-storage.svc access_key_id: <data_foundation_access_key_id> access_key_secret: <data_foundation_access_key_secret> MinIO See MinIO Operator . name: tempostack-dev-minio # example bucket: <minio_bucket_name> # MinIO documentation endpoint: <minio_bucket_endpoint> access_key_id: <minio_access_key_id> access_key_secret: <minio_access_key_secret> Amazon S3 name: tempostack-dev-s3 # example bucket: <s3_bucket_name> # Amazon S3 documentation endpoint: <s3_bucket_endpoint> access_key_id: <s3_access_key_id> access_key_secret: <s3_access_key_secret> Microsoft Azure Blob Storage name: tempostack-dev-azure # example container: <azure_blob_storage_container_name> # Microsoft Azure documentation account_name: <azure_blob_storage_account_name> account_key: <azure_blob_storage_account_key> Google Cloud Storage on Google Cloud Platform (GCP) name: tempostack-dev-gcs # example bucketname: <google_cloud_storage_bucket_name> # requires a bucket created in a GCP project key.json: <path/to/key.json> # requires a service account in the bucket's GCP project for GCP authentication 4.2.1.3. Query configuration options Two components of the distributed tracing platform (Tempo), the querier and query frontend, manage queries. You can configure both of these components. The querier component finds the requested trace ID in the ingesters or back-end storage. Depending on the set parameters, the querier component can query both the ingesters and pull bloom or indexes from the back end to search blocks in object storage. The querier component exposes an HTTP endpoint at GET /querier/api/traces/<traceID> , but it is not expected to be used directly. Queries must be sent to the query frontend. Table 4.5. 
Configuration parameters for the querier component Parameter Description Values nodeSelector The simple form of the node-selection constraint. type: object replicas The number of replicas to be created for the component. type: integer; format: int32 tolerations Component-specific pod tolerations. type: array The query frontend component is responsible for sharding the search space for an incoming query. The query frontend exposes traces via a simple HTTP endpoint: GET /api/traces/<traceID> . Internally, the query frontend component splits the blockID space into a configurable number of shards and then queues these requests. The querier component connects to the query frontend component via a streaming gRPC connection to process these sharded queries. Table 4.6. Configuration parameters for the query frontend component Parameter Description Values component Configuration of the query frontend component. type: object component.nodeSelector The simple form of the node selection constraint. type: object component.replicas The number of replicas to be created for the query frontend component. type: integer; format: int32 component.tolerations Pod tolerations specific to the query frontend component. type: array jaegerQuery The options specific to the Jaeger Query component. type: object jaegerQuery.enabled When enabled , creates the Jaeger Query component, jaegerQuery . type: boolean jaegerQuery.ingress The options for the Jaeger Query ingress. type: object jaegerQuery.ingress.annotations The annotations of the ingress object. type: object jaegerQuery.ingress.host The hostname of the ingress object. type: string jaegerQuery.ingress.ingressClassName The name of an IngressClass cluster resource. Defines which ingress controller serves this ingress resource. type: string jaegerQuery.ingress.route The options for the OpenShift route. type: object jaegerQuery.ingress.route.termination The termination type. The default is edge . type: string (enum: insecure, edge, passthrough, reencrypt) jaegerQuery.ingress.type The type of ingress for the Jaeger Query UI. The supported types are ingress , route , and none . type: string (enum: ingress, route) jaegerQuery.monitorTab The monitor tab configuration. type: object jaegerQuery.monitorTab.enabled Enables the monitor tab in the Jaeger console. The PrometheusEndpoint must be configured. type: boolean jaegerQuery.monitorTab.prometheusEndpoint The endpoint to the Prometheus instance that contains the span rate, error, and duration (RED) metrics. For example, https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 . type: string Example configuration of the query frontend component in a TempoStack CR apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest spec: storage: secret: name: minio type: s3 storageSize: 200M resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route 4.2.1.3.1. Additional resources Understanding taints and tolerations 4.2.1.4. Configuration of the monitor tab in Jaeger UI Trace data contains rich information, and the data is normalized across instrumented languages and frameworks. Therefore, request rate, error, and duration (RED) metrics can be extracted from traces. The metrics can be visualized in Jaeger console in the Monitor tab. The metrics are derived from spans in the OpenTelemetry Collector that are scraped from the Collector by the Prometheus deployed in the user-workload monitoring stack. 
The Jaeger UI queries these metrics from the Prometheus endpoint and visualizes them. 4.2.1.4.1. OpenTelemetry Collector configuration The OpenTelemetry Collector requires configuration of the spanmetrics connector that derives metrics from traces and exports the metrics in the Prometheus format. OpenTelemetry Collector custom resource for span RED kind: OpenTelemetryCollector apiVersion: opentelemetry.io/v1alpha1 metadata: name: otel spec: mode: deployment observability: metrics: enableMetrics: true 1 config: | connectors: spanmetrics: 2 metrics_flush_interval: 15s receivers: otlp: 3 protocols: grpc: http: exporters: prometheus: 4 endpoint: 0.0.0.0:8889 add_metric_suffixes: false resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped otlp: endpoint: "tempo-simplest-distributor:4317" tls: insecure: true service: pipelines: traces: receivers: [otlp] exporters: [otlp, spanmetrics] 5 metrics: receivers: [spanmetrics] 6 exporters: [prometheus] 1 Creates the ServiceMonitor custom resource to enable scraping of the Prometheus exporter. 2 The Spanmetrics connector receives traces and exports metrics. 3 The OTLP receiver to receive spans in the OpenTelemetry protocol. 4 The Prometheus exporter is used to export metrics in the Prometheus format. 5 The Spanmetrics connector is configured as exporter in traces pipeline. 6 The Spanmetrics connector is configured as receiver in metrics pipeline. 4.2.1.4.2. Tempo configuration The TempoStack custom resource must specify the following: the Monitor tab is enabled, and the Prometheus endpoint is set to the Thanos querier service to query the data from the user-defined monitoring stack. TempoStack custom resource with the enabled Monitor tab kind: TempoStack apiVersion: tempo.grafana.com/v1alpha1 metadata: name: simplest spec: template: queryFrontend: jaegerQuery: enabled: true monitorTab: enabled: true 1 prometheusEndpoint: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 2 ingress: type: route 1 Enables the monitoring tab in the Jaeger console. 2 The service name for Thanos Querier from user-workload monitoring. 4.2.1.5. Multitenancy Multitenancy with authentication and authorization is provided in the Tempo Gateway service. The authentication uses OpenShift OAuth and the Kubernetes TokenReview API. The authorization uses the Kubernetes SubjectAccessReview API. Sample Tempo CR with two tenants, dev and prod apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest spec: tenants: mode: openshift 1 authentication: 2 - tenantName: dev 3 tenantId: "1610b0c3-c509-4592-a256-a1871353dbfa" 4 - tenantName: prod tenantId: "1610b0c3-c509-4592-a256-a1871353dbfb" template: gateway: enabled: true 5 queryFrontend: jaegerQuery: enabled: true 1 Must be set to openshift . 2 The list of tenants. 3 The tenant name. Must be provided in the X-Scope-OrgId header when ingesting the data. 4 A unique tenant ID. 5 Enables a gateway that performs authentication and authorization. The Jaeger UI is exposed at http://<gateway-ingress>/api/traces/v1/<tenant-name>/search . The authorization configuration uses the ClusterRole and ClusterRoleBinding of the Kubernetes Role-Based Access Control (RBAC). By default, no users have read or write permissions. 
Sample of the read RBAC configuration that allows authenticated users to read the trace data of the dev and prod tenants apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: tempostack-traces-reader rules: - apiGroups: - 'tempo.grafana.com' resources: 1 - dev - prod resourceNames: - traces verbs: - 'get' 2 --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: tempostack-traces-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: tempostack-traces-reader subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: system:authenticated 3 1 Lists the tenants. 2 The get value enables the read operation. 3 Grants all authenticated users the read permissions for trace data. Sample of the write RBAC configuration that allows the otel-collector service account to write the trace data for the dev tenant apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector 1 namespace: otel --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: tempostack-traces-write rules: - apiGroups: - 'tempo.grafana.com' resources: 2 - dev resourceNames: - traces verbs: - 'create' 3 --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: tempostack-traces roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: tempostack-traces-write subjects: - kind: ServiceAccount name: otel-collector namespace: otel 1 The service account name for the client to use when exporting trace data. The client must send the service account token, /var/run/secrets/kubernetes.io/serviceaccount/token , as the bearer token header. 2 Lists the tenants. 3 The create value enables the write operation. Trace data can be sent to the Tempo instance from the OpenTelemetry Collector that uses the service account with RBAC for writing the data. Sample OpenTelemetry CR configuration apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: tracing-system spec: mode: deployment serviceAccount: otel-collector config: | extensions: bearertokenauth: filename: "/var/run/secrets/kubernetes.io/serviceaccount/token" exporters: otlp/dev: endpoint: tempo-simplest-gateway.tempo.svc.cluster.local:8090 tls: insecure: false ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" auth: authenticator: bearertokenauth headers: X-Scope-OrgID: "dev" service: extensions: [bearertokenauth] pipelines: traces: exporters: [otlp/dev] 4.2.2. Setting up monitoring for the distributed tracing platform (Tempo) The Tempo Operator supports monitoring and alerting of each TempoStack component such as distributor, ingester, and so on, and exposes upgrade and operational metrics about the Operator itself. 4.2.2.1. Configuring TempoStack metrics and alerts You can enable metrics and alerts of TempoStack instances. Prerequisites Monitoring for user-defined projects is enabled in the cluster. See Enabling monitoring for user-defined projects . 
Procedure To enable metrics of a TempoStack instance, set the spec.observability.metrics.createServiceMonitors field to true : apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: <name> spec: observability: metrics: createServiceMonitors: true To enable alerts for a TempoStack instance, set the spec.observability.metrics.createPrometheusRules field to true : apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: <name> spec: observability: metrics: createPrometheusRules: true Verification You can use the Administrator view of the web console to verify successful configuration: Go to Observe Targets , filter for Source: User , and check that ServiceMonitors in the format tempo-<instance_name>-<component> have the Up status. To verify that alerts are set up correctly, go to Observe Alerting Alerting rules , filter for Source: User , and check that the Alert rules for the TempoStack instance components are available. 4.2.2.2. Configuring Tempo Operator metrics and alerts When installing the Tempo Operator from the web console, you can select the Enable Operator recommended cluster monitoring on this Namespace checkbox, which enables creating metrics and alerts of the Tempo Operator. If the checkbox was not selected during installation, you can manually enable metrics and alerts even after installing the Tempo Operator. Procedure Add the openshift.io/cluster-monitoring: "true" label in the project where the Tempo Operator is installed, which is openshift-tempo-operator by default. Verification You can use the Administrator view of the web console to verify successful configuration: Go to Observe Targets , filter for Source: Platform , and search for tempo-operator , which must have the Up status. To verify that alerts are set up correctly, go to Observe Alerting Alerting rules , filter for Source: Platform , and locate the Alert rules for the Tempo Operator . 4.3. Updating the distributed tracing platform (Tempo) For version upgrades, the Tempo Operator uses the Operator Lifecycle Manager (OLM), which controls installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. The OLM runs in the OpenShift Container Platform by default. The OLM queries for available Operators as well as upgrades for installed Operators. When the Tempo Operator is upgraded to the new version, it scans for running TempoStack instances that it manages and upgrades them to the version corresponding to the Operator's new version. 4.3.1. Additional resources Operator Lifecycle Manager concepts and resources Updating installed Operators 4.4. Removing the Red Hat OpenShift distributed tracing platform (Tempo) The steps for removing the Red Hat OpenShift distributed tracing platform (Tempo) from an OpenShift Container Platform cluster are as follows: Shut down all distributed tracing platform (Tempo) pods. Remove any TempoStack instances. Remove the Tempo Operator. 4.4.1. Removing a TempoStack instance by using the web console You can remove a TempoStack instance in the Administrator view of the web console. Prerequisites You are logged in to the OpenShift Container Platform web console as a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role. Procedure Go to Operators Installed Operators Tempo Operator TempoStack . To remove the TempoStack instance, select Delete TempoStack Delete . Optional: Remove the Tempo Operator. 4.4.2. 
Removing a TempoStack instance by using the CLI You can remove a TempoStack instance on the command line. Prerequisites An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. Tip Ensure that your OpenShift CLI ( oc ) version is up to date and matches your OpenShift Container Platform version. Run oc login : $ oc login --username=<your_username> Procedure Get the name of the TempoStack instance by running the following command: $ oc get deployments -n <project_of_tempostack_instance> Remove the TempoStack instance by running the following command: $ oc delete tempo <tempostack_instance_name> -n <project_of_tempostack_instance> Optional: Remove the Tempo Operator. Verification Run the following command to verify that the TempoStack instance is not found in the output, which indicates its successful removal: $ oc get deployments -n <project_of_tempostack_instance> 4.4.3. Additional resources Deleting Operators from a cluster Getting started with the OpenShift CLI | [
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: sample namespace: <project_of_tempostack_instance> spec: storageSize: 1Gi storage: secret: name: <secret-name> 1 type: <secret-provider> 2 template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: <project_of_tempostack_instance> spec: storageSize: 1Gi storage: secret: name: minio-test type: s3 resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route",
"oc login --username=<your_username>",
"oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: labels: kubernetes.io/metadata.name: openshift-tempo-operator openshift.io/cluster-monitoring: \"true\" name: openshift-tempo-operator EOF",
"oc apply -f - << EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-tempo-operator namespace: openshift-tempo-operator spec: upgradeStrategy: Default EOF",
"oc apply -f - << EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: tempo-product namespace: openshift-tempo-operator spec: channel: stable installPlanApproval: Automatic name: tempo-product source: redhat-operators sourceNamespace: openshift-marketplace EOF",
"oc get csv -n openshift-tempo-operator",
"oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_tempostack_instance> EOF",
"oc apply -f - << EOF <object_storage_secret> EOF",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: sample namespace: <project_of_tempostack_instance> spec: storageSize: 1Gi storage: secret: name: <secret-name> 1 type: <secret-provider> 2 template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: project_of_tempostack_instance spec: storageSize: 1Gi storage: secret: name: minio-test type: s3 resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route",
"oc apply -f - << EOF <TempoStack_custom_resource> EOF",
"oc get tempostacks.tempo.grafana.com simplest -o yaml",
"oc get pods",
"export TEMPO_URL=USD(oc get route -n <control_plane_namespace> tempo -o jsonpath='{.spec.host}')",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: name spec: storage: {} resources: {} storageSize: 200M replicationFactor: 1 retention: {} template: distributor:{} ingester: {} compactor: {} querier: {} queryFrontend: {} gateway: {}",
"apiVersion:",
"kind:",
"metadata:",
"name:",
"spec:",
"resources:",
"storageSize:",
"replicationFactor:",
"retention:",
"storage:",
"template.distributor:",
"template.ingester:",
"template.compactor:",
"template.querier:",
"template.queryFrontend:",
"template.gateway:",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest spec: storage: 1 secret: name: minio type: s3 resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: enabled: true ingress: type: route",
"spec: storage: secret type:",
"storage: secretname:",
"storage: tls: caName:",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest spec: storage: secret: name: minio type: s3 storageSize: 200M resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route",
"kind: OpenTelemetryCollector apiVersion: opentelemetry.io/v1alpha1 metadata: name: otel spec: mode: deployment observability: metrics: enableMetrics: true 1 config: | connectors: spanmetrics: 2 metrics_flush_interval: 15s receivers: otlp: 3 protocols: grpc: http: exporters: prometheus: 4 endpoint: 0.0.0.0:8889 add_metric_suffixes: false resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped otlp: endpoint: \"tempo-simplest-distributor:4317\" tls: insecure: true service: pipelines: traces: receivers: [otlp] exporters: [otlp, spanmetrics] 5 metrics: receivers: [spanmetrics] 6 exporters: [prometheus]",
"kind: TempoStack apiVersion: tempo.grafana.com/v1alpha1 metadata: name: simplest spec: template: queryFrontend: jaegerQuery: enabled: true monitorTab: enabled: true 1 prometheusEndpoint: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 2 ingress: type: route",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest spec: tenants: mode: openshift 1 authentication: 2 - tenantName: dev 3 tenantId: \"1610b0c3-c509-4592-a256-a1871353dbfa\" 4 - tenantName: prod tenantId: \"1610b0c3-c509-4592-a256-a1871353dbfb\" template: gateway: enabled: true 5 queryFrontend: jaegerQuery: enabled: true",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: tempostack-traces-reader rules: - apiGroups: - 'tempo.grafana.com' resources: 1 - dev - prod resourceNames: - traces verbs: - 'get' 2 --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: tempostack-traces-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: tempostack-traces-reader subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: system:authenticated 3",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector 1 namespace: otel --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: tempostack-traces-write rules: - apiGroups: - 'tempo.grafana.com' resources: 2 - dev resourceNames: - traces verbs: - 'create' 3 --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: tempostack-traces roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: tempostack-traces-write subjects: - kind: ServiceAccount name: otel-collector namespace: otel",
"apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: tracing-system spec: mode: deployment serviceAccount: otel-collector config: | extensions: bearertokenauth: filename: \"/var/run/secrets/kubernetes.io/serviceaccount/token\" exporters: otlp/dev: endpoint: tempo-simplest-gateway.tempo.svc.cluster.local:8090 tls: insecure: false ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" auth: authenticator: bearertokenauth headers: X-Scope-OrgID: \"dev\" service: extensions: [bearertokenauth] pipelines: traces: exporters: [otlp/dev]",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: <name> spec: observability: metrics: createServiceMonitors: true",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: <name> spec: observability: metrics: createPrometheusRules: true",
"oc login --username=<your_username>",
"oc get deployments -n <project_of_tempostack_instance>",
"oc delete tempo <tempostack_instance_name> -n <project_of_tempostack_instance>",
"oc get deployments -n <project_of_tempostack_instance>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/distributed_tracing/distributed-tracing-platform-tempo |
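As a command-line companion to the Tempo Operator monitoring procedure above, the namespace label can also be applied with oc. This is a sketch that assumes the default installation namespace:
# Enable platform monitoring for the Tempo Operator project
# (adjust the namespace if the Operator was installed elsewhere)
oc label namespace openshift-tempo-operator openshift.io/cluster-monitoring=true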
Chapter 5. Overview of persistent naming attributes | Chapter 5. Overview of persistent naming attributes As a system administrator, you need to refer to storage volumes using persistent naming attributes to build storage setups that are reliable over multiple system boots. 5.1. Disadvantages of non-persistent naming attributes Red Hat Enterprise Linux provides a number of ways to identify storage devices. It is important to use the correct option to identify each device when used in order to avoid inadvertently accessing the wrong device, particularly when installing to or reformatting drives. Traditionally, non-persistent names in the form of /dev/sd (major number) (minor number) are used on Linux to refer to storage devices. The major and minor number range and associated sd names are allocated for each device when it is detected. This means that the association between the major and minor number range and associated sd names can change if the order of device detection changes. Such a change in the ordering might occur in the following situations: The parallelization of the system boot process detects storage devices in a different order with each system boot. A disk fails to power up or respond to the SCSI controller. This results in it not being detected by the normal device probe. The disk is not accessible to the system and subsequent devices will have their major and minor number range, including the associated sd names, shifted down. For example, if a disk normally referred to as sdb is not detected, a disk that is normally referred to as sdc would instead appear as sdb . A SCSI controller (host bus adapter, or HBA) fails to initialize, causing all disks connected to that HBA to not be detected. Any disks connected to subsequently probed HBAs are assigned different major and minor number ranges, and different associated sd names. The order of driver initialization changes if different types of HBAs are present in the system. This causes the disks connected to those HBAs to be detected in a different order. This might also occur if HBAs are moved to different PCI slots on the system. Disks connected to the system with Fibre Channel, iSCSI, or FCoE adapters might be inaccessible at the time the storage devices are probed, due to a storage array or intervening switch being powered off, for example. This might occur when a system reboots after a power failure, if the storage array takes longer to come online than the system takes to boot. Although some Fibre Channel drivers support a mechanism to specify a persistent SCSI target ID to WWPN mapping, this does not cause the major and minor number ranges, and the associated sd names to be reserved; it only provides consistent SCSI target ID numbers. These reasons make it undesirable to use the major and minor number range or the associated sd names when referring to devices, such as in the /etc/fstab file. There is the possibility that the wrong device will be mounted and data corruption might result. Occasionally, however, it is still necessary to refer to the sd names even when another mechanism is used, such as when errors are reported by a device. This is because the Linux kernel uses sd names (and also SCSI host/channel/target/LUN tuples) in kernel messages regarding the device. 5.2. File system and device identifiers File system identifiers are tied to the file system itself, while device identifiers are linked to the physical block device. Understanding the difference is important for proper storage management.
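To make the distinction concrete, the following shell sketch inspects both kinds of identifiers for one disk; /dev/sda1 is a placeholder device name:
# File system identifiers (UUID, label) are stored in the data on the device
# and are lost if the device is reformatted with mkfs:
blkid /dev/sda1

# Device identifiers (WWID, serial number) are properties of the hardware and
# survive reformatting; udev exposes them as symbolic links:
ls -l /dev/disk/by-id/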
File system identifiers File system identifiers are tied to a particular file system created on a block device. The identifier is also stored as part of the file system. If you copy the file system to a different device, it still carries the same file system identifier. However, if you rewrite the device, such as by formatting it with the mkfs utility, the device loses the attribute. File system identifiers include: Unique identifier (UUID) Label Device identifiers Device identifiers are tied to a block device: for example, a disk or a partition. If you rewrite the device, such as by formatting it with the mkfs utility, the device keeps the attribute, because it is not stored in the file system. Device identifiers include: World Wide Identifier (WWID) Partition UUID Serial number Recommendations Some file systems, such as logical volumes, span multiple devices. Red Hat recommends accessing these file systems using file system identifiers rather than device identifiers. 5.3. Device names managed by the udev mechanism in /dev/disk/ The udev mechanism is used for all types of devices in Linux, and is not limited only for storage devices. It provides different kinds of persistent naming attributes in the /dev/disk/ directory. In the case of storage devices, Red Hat Enterprise Linux contains udev rules that create symbolic links in the /dev/disk/ directory. This enables you to refer to storage devices by: Their content A unique identifier Their serial number. Although udev naming attributes are persistent, in that they do not change on their own across system reboots, some are also configurable. 5.3.1. File system identifiers The UUID attribute in /dev/disk/by-uuid/ Entries in this directory provide a symbolic name that refers to the storage device by a unique identifier (UUID) in the content (that is, the data) stored on the device. For example: You can use the UUID to refer to the device in the /etc/fstab file using the following syntax: You can configure the UUID attribute when creating a file system, and you can also change it later on. The Label attribute in /dev/disk/by-label/ Entries in this directory provide a symbolic name that refers to the storage device by a label in the content (that is, the data) stored on the device. For example: You can use the label to refer to the device in the /etc/fstab file using the following syntax: You can configure the Label attribute when creating a file system, and you can also change it later on. 5.3.2. Device identifiers The WWID attribute in /dev/disk/by-id/ The World Wide Identifier (WWID) is a persistent, system-independent identifier that the SCSI Standard requires from all SCSI devices. The WWID identifier is guaranteed to be unique for every storage device, and independent of the path that is used to access the device. The identifier is a property of the device but is not stored in the content (that is, the data) on the devices. This identifier can be obtained by issuing a SCSI Inquiry to retrieve the Device Identification Vital Product Data (page 0x83 ) or Unit Serial Number (page 0x80 ). Red Hat Enterprise Linux automatically maintains the proper mapping from the WWID-based device name to a current /dev/sd name on that system. Applications can use the /dev/disk/by-id/ name to reference the data on the disk, even if the path to the device changes, and even when accessing the device from different systems. Example 5.1. 
WWID mappings WWID symlink Non-persistent device Note /dev/disk/by-id/scsi-3600508b400105e210000900000490000 /dev/sda A device with a page 0x83 identifier /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6 /dev/sdb A device with a page 0x80 identifier /dev/disk/by-id/ata-SAMSUNG_MZNLN256HMHQ-000L7_S2WDNX0J336519-part3 /dev/sdc3 A disk partition In addition to these persistent names provided by the system, you can also use udev rules to implement persistent names of your own, mapped to the WWID of the storage. The Partition UUID attribute in /dev/disk/by-partuuid The Partition UUID (PARTUUID) attribute identifies partitions as defined by GPT partition table. Example 5.2. Partition UUID mappings PARTUUID symlink Non-persistent device /dev/disk/by-partuuid/4cd1448a-01 /dev/sda1 /dev/disk/by-partuuid/4cd1448a-02 /dev/sda2 /dev/disk/by-partuuid/4cd1448a-03 /dev/sda3 The Path attribute in /dev/disk/by-path/ This attribute provides a symbolic name that refers to the storage device by the hardware path used to access the device. The Path attribute fails if any part of the hardware path (for example, the PCI ID, target port, or LUN number) changes. The Path attribute is therefore unreliable. However, the Path attribute may be useful in one of the following scenarios: You need to identify a disk that you are planning to replace later. You plan to install a storage service on a disk in a specific location. 5.4. The World Wide Identifier with DM Multipath You can configure Device Mapper (DM) Multipath to map between the World Wide Identifier (WWID) and non-persistent device names. If there are multiple paths from a system to a device, DM Multipath uses the WWID to detect this. DM Multipath then presents a single "pseudo-device" in the /dev/mapper/wwid directory, such as /dev/mapper/3600508b400105df70000e00000ac0000 . The command multipath -l shows the mapping to the non-persistent identifiers: Host : Channel : Target : LUN /dev/sd name major : minor number Example 5.3. WWID mappings in a multipath configuration An example output of the multipath -l command: DM Multipath automatically maintains the proper mapping of each WWID-based device name to its corresponding /dev/sd name on the system. These names are persistent across path changes, and they are consistent when accessing the device from different systems. When the user_friendly_names feature of DM Multipath is used, the WWID is mapped to a name of the form /dev/mapper/mpath N . By default, this mapping is maintained in the file /etc/multipath/bindings . These mpath N names are persistent as long as that file is maintained. Important If you use user_friendly_names , then additional steps are required to obtain consistent names in a cluster. 5.5. Limitations of the udev device naming convention The following are some limitations of the udev naming convention: It is possible that the device might not be accessible at the time the query is performed because the udev mechanism might rely on the ability to query the storage device when the udev rules are processed for a udev event. This is more likely to occur with Fibre Channel, iSCSI or FCoE storage devices when the device is not located in the server chassis. The kernel might send udev events at any time, causing the rules to be processed and possibly causing the /dev/disk/by-*/ links to be removed if the device is not accessible. 
There might be a delay between when the udev event is generated and when it is processed, such as when a large number of devices are detected and the user-space udevd service takes some amount of time to process the rules for each one. This might cause a delay between when the kernel detects the device and when the /dev/disk/by-*/ names are available. External programs such as blkid invoked by the rules might open the device for a brief period of time, making the device inaccessible for other uses. The device names managed by the udev mechanism in /dev/disk/ may change between major releases, requiring you to update the links. 5.6. Listing persistent naming attributes You can find out the persistent naming attributes of non-persistent storage devices. Procedure To list the UUID and Label attributes, use the lsblk utility: For example: Example 5.4. Viewing the UUID and Label of a file system To list the PARTUUID attribute, use the lsblk utility with the --output +PARTUUID option: For example: Example 5.5. Viewing the PARTUUID attribute of a partition To list the WWID attribute, examine the targets of symbolic links in the /dev/disk/by-id/ directory. For example: Example 5.6. Viewing the WWID of all storage devices on the system 5.7. Modifying persistent naming attributes You can change the UUID or Label persistent naming attribute of a file system. Note Changing udev attributes happens in the background and might take a long time. The udevadm settle command waits until the change is fully registered, which ensures that your command will be able to use the new attribute correctly. In the following commands: Replace new-uuid with the UUID you want to set; for example, 1cdfbc07-1c90-4984-b5ec-f61943f5ea50 . You can generate a UUID using the uuidgen command. Replace new-label with a label; for example, backup_data . Prerequisites If you are modifying the attributes of an XFS file system, unmount it first. Procedure To change the UUID or Label attributes of an XFS file system, use the xfs_admin utility: To change the UUID or Label attributes of an ext4 , ext3 , or ext2 file system, use the tune2fs utility: To change the UUID or Label attributes of a swap volume, use the swaplabel utility: | [
"/dev/disk/by-uuid/ 3e6be9de-8139-11d1-9106-a43f08d823a6",
"UUID= 3e6be9de-8139-11d1-9106-a43f08d823a6",
"/dev/disk/by-label/ Boot",
"LABEL= Boot",
"3600508b400105df70000e00000ac0000 dm-2 vendor,product [size=20G][features=1 queue_if_no_path][hwhandler=0][rw] \\_ round-robin 0 [prio=0][active] \\_ 5:0:1:1 sdc 8:32 [active][undef] \\_ 6:0:1:1 sdg 8:96 [active][undef] \\_ round-robin 0 [prio=0][enabled] \\_ 5:0:0:1 sdb 8:16 [active][undef] \\_ 6:0:0:1 sdf 8:80 [active][undef]",
"lsblk --fs storage-device",
"lsblk --fs /dev/sda1 NAME FSTYPE LABEL UUID MOUNTPOINT sda1 xfs Boot afa5d5e3-9050-48c3-acc1-bb30095f3dc4 /boot",
"lsblk --output +PARTUUID",
"lsblk --output +PARTUUID /dev/sda1 NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT PARTUUID sda1 8:1 0 512M 0 part /boot 4cd1448a-01",
"file /dev/disk/by-id/ * /dev/disk/by-id/ata-QEMU_HARDDISK_QM00001 symbolic link to ../../sda /dev/disk/by-id/ata-QEMU_HARDDISK_QM00001-part1 symbolic link to ../../sda1 /dev/disk/by-id/ata-QEMU_HARDDISK_QM00001-part2 symbolic link to ../../sda2 /dev/disk/by-id/dm-name-rhel_rhel8-root symbolic link to ../../dm-0 /dev/disk/by-id/dm-name-rhel_rhel8-swap symbolic link to ../../dm-1 /dev/disk/by-id/dm-uuid-LVM-QIWtEHtXGobe5bewlIUDivKOz5ofkgFhP0RMFsNyySVihqEl2cWWbR7MjXJolD6g symbolic link to ../../dm-1 /dev/disk/by-id/dm-uuid-LVM-QIWtEHtXGobe5bewlIUDivKOz5ofkgFhXqH2M45hD2H9nAf2qfWSrlRLhzfMyOKd symbolic link to ../../dm-0 /dev/disk/by-id/lvm-pv-uuid-atlr2Y-vuMo-ueoH-CpMG-4JuH-AhEF-wu4QQm symbolic link to ../../sda2",
"xfs_admin -U new-uuid -L new-label storage-device udevadm settle",
"tune2fs -U new-uuid -L new-label storage-device udevadm settle",
"swaplabel --uuid new-uuid --label new-label swap-device udevadm settle"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_storage_devices/assembly_overview-of-persistent-naming-attributes_managing-storage-devices |
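Tying the chapter together, a persistent reference in /etc/fstab looks like the following sketch. The UUID is the one from the example above; the mount point and file system type are assumptions for illustration:
# /etc/fstab - refer to the file system by UUID instead of a non-persistent /dev/sd* name
UUID=3e6be9de-8139-11d1-9106-a43f08d823a6  /boot  xfs  defaults  0 0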
Red Hat Quay Operator features | Red Hat Quay Operator features Red Hat Quay 3.9 Advanced Red Hat Quay Operator features Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/red_hat_quay_operator_features/index |
Chapter 3. Using ingress control for a MicroShift cluster | Chapter 3. Using ingress control for a MicroShift cluster Use the ingress controller options in the MicroShift configuration file to make pods and services accessible outside the cluster. 3.1. Using ingress control in MicroShift When you create your MicroShift cluster, each pod and service running on the cluster is allocated an IP address. These IP addresses are accessible to other pods and services running nearby by default, but are not accessible to external clients. MicroShift uses a minimal implementation of the OpenShift Container Platform IngressController API to enable external access to cluster services. With more configuration options, you can fine-tune ingress to meet your specific needs. To use enhanced ingress control, update the parameters in the MicroShift configuration file and restart the service. Ingress configuration is useful in a variety of ways, for example: If your application starts processing requests from clients but the connection is closed before it can respond, you can set the ingress.tuningOptions.serverTimeout parameter in the configuration file to a higher value to accommodate the speed of the response from the server. If the router has many connections open because an application running on the cluster does not close connections properly, you can set the ingress.tuningOptions.serverTimeout and spec.tuningOptions.serverFinTimeout parameters to a lower value, forcing those connections to close sooner. 3.2. Configuring ingress control in MicroShift You can use detailed ingress control settings by updating the MicroShift service configuration file. Prerequisites You installed the OpenShift CLI ( oc ). You have root access to the cluster. Your cluster uses the OVN-Kubernetes network plugin. Procedure Apply ingress control settings in one of the two following ways: Update the MicroShift config.yaml configuration file by making a copy of the provided config.yaml.default file in the /etc/microshift/ directory, naming it config.yaml and keeping it in the source directory. After you create it, the config.yaml file takes precedence over built-in settings. The configuration file is read every time the MicroShift service starts. Use a configuration snippet to apply the ingress control settings you want. To do this, create a configuration snippet YAML file and put it in the /etc/microshift/config.d/ configuration directory. Configuration snippet YAMLs take precedence over both built-in settings and a config.yaml configuration file. See the Additional resources links for more information. Replace the default values in the network section of the MicroShift YAML with your valid values, or create a configuration snippet file with the sections you need. Ingress controller configuration fields with default values apiServer: # ... ingress: defaultHTTPVersion: 1 forwardedHeaderPolicy: Append httpCompression: mimeTypes: - "" httpEmptyRequestsPolicy: Respond # ... logEmptyRequests: Log # ... tuningOptions: clientFinTimeout: 1s clientTimeout: 30s headerBufferBytes: 0 headerBufferMaxRewriteBytes: 0 healthCheckInterval: 5s maxConnections: 0 serverFinTimeout: 1s serverTimeout: 30s threadCount: 4 tlsInspectDelay: 5s tunnelTimeout: 1h # ... Table 3.1. Ingress controller configuration fields definitions table Parameter Description defaultHTTPVersion Sets the HTTP version for the ingress controller. Default value is 1 for HTTP 1.1. 
forwardedHeaderPolicy Specifies when and how the ingress controller sets the Forwarded , X-Forwarded-For , X-Forwarded-Host , X-Forwarded-Port , X-Forwarded-Proto , and X-Forwarded-Proto-Version HTTP headers. The following values are valid: Append , preserves any existing headers by specifying that the ingress controller appends them. Replace , removes any existing headers by specifying that the ingress controller sets the headers. IfNone , specifies that the ingress controller sets the headers only if they are not already set. Never , preserves any existing headers by specifying that the ingress controller never sets the headers. httpCompression Defines the policy for HTTP traffic compression. httpCompression.mimeTypes defines a list of MIME types, or a single MIME type, to which compression is applied. For example, text/css; charset=utf-8 , text/html , text/* , image/svg+xml , application/octet-stream , X-custom/customsub , using the format pattern, type/subtype; [;attribute=value] . The types are: application, image, message, multipart, text, video, or a custom type prefaced by X- . To see the full notation for MIME types and subtypes, see RFC1341 (IETF Datatracker documentation). httpEmptyRequestsPolicy Describes how HTTP connections are handled if the connection times out before a request is received. Allowed values for this field are Respond and Ignore . The default value is Respond . The following are valid values: Respond , causes the ingress controller to send an HTTP 400 or 408 response, logs the connection if access logging is enabled, and counts the connection in the appropriate metrics. Ignore , adds the http-ignore-probes parameter in the HAproxy configuration. If the field is set to Ignore , the ingress controller closes the connection without sending a response, logging the connection, or incrementing metrics. Usually, empty request connections come from load balancer health probes or web browser preconnects and can be safely ignored. However, network errors and port scans can also create these empty requests, so setting this field to Ignore can impede detecting or diagnosing problems and also impede the detection of intrusion attempts. logEmptyRequests Specifies whether connections for which no request is received are logged. Usually, these empty requests come from load balancer health probes or web browser speculative connections such as preconnects. Logging these types of empty requests can be undesirable. However, network errors and port scans can also create empty requests, so setting this field to Ignore can impede detecting or diagnosing problems and also impede the detection of intrusion attempts. The following are valid values: Log , which indicates that an event should be logged. Ignore , which sets the dontlognull option in the HAproxy configuration. tuningOptions Specifies options for tuning the performance of ingress controller pods. The tuningOptions.clientFinTimeout parameter specifies how long a connection is held open while waiting for the client response to the server closing the connection. The default timeout is 1s . The tuningOptions.clientTimeout parameter specifies how long a connection is held open while waiting for a client response. The default timeout is 30s . The tuningOptions.headerBufferBytes parameter specifies how much memory is reserved, in bytes, for Ingress Controller connection sessions. This value must be at least 16384 if HTTP/2 is enabled for the Ingress Controller. If not set, the default value is 32768 bytes.
Important Setting this field is not recommended because headerBufferMaxRewriteBytes parameter values that are too small can break the ingress controller. Conversely, values for headerBufferMaxRewriteBytes that are too large could cause the ingress controller to use significantly more memory than necessary. The tuningOptions.headerBufferMaxRewriteBytes parameter specifies how much memory should be reserved, in bytes, from headerBufferBytes for HTTP header rewriting and appending for Ingress Controller connection sessions. The minimum value for headerBufferMaxRewriteBytes is 4096 . The headerBufferBytes value must be greater than the headerBufferMaxRewriteBytes value for incoming HTTP requests. If not set, the default value is 8192 bytes. Important Setting this field is not recommended because headerBufferMaxRewriteBytes parameter values that are too small can break the ingress controller. Conversely, values for headerBufferMaxRewriteBytes that are too large could cause the ingress controller to use significantly more memory than necessary. The tuningOptions.healthCheckInterval parameter specifies how long the router waits between health checks. The default is 5s . The tuningOptions.serverFinTimeout parameter specifies how long a connection is held open while waiting for the server response to the client that is closing the connection. The default timeout is 1s . The tuningOptions.serverTimeout parameter specifies how long a connection is held open while waiting for a server response. The default timeout is 30s . The tuningOptions.threadCount parameter specifies the number of threads to create per HAProxy process. Creating more threads allows each ingress controller pod to handle more connections, at the cost of more system resources being used. HAProxy supports up to 64 threads. If this field is empty, the default value is 4 threads. Important Setting this field is not recommended because increasing the number of HAProxy threads allows ingress controller pods to use more CPU time under load, and can prevent other pods from receiving the CPU resources they need to perform. The tuningOptions.tlsInspectDelay parameter specifies how long the router can hold data to find a matching route. Setting this value too short can cause the router to fall back to the default certificate for edge-terminated, re-encrypted, or passthrough routes, even when using a better-matched certificate. The default inspect delay is 5s . The tuningOptions.tunnelTimeout parameter specifies how long a tunnel connection, including websockets, remains open while the tunnel is idle. The default timeout is 1h . Complete any other configurations you require, then start or restart MicroShift by running one of the following commands: $ sudo systemctl start microshift $ sudo systemctl restart microshift Verification After making ingress configuration changes and restarting MicroShift, you can check the age of the router pod to ensure that changes have been applied. To check the status of the router pod, run the following command: $ oc get pods -n openshift-ingress Example output NAME READY STATUS RESTARTS AGE router-default-8649b5bf65-w29cn 1/1 Running 0 6m10s 3.3. Additional resources Using configuration snippets Enabling HTTP/2 Ingress connectivity (OpenShift Container Platform documentation)
"apiServer: ingress: defaultHTTPVersion: 1 forwardedHeaderPolicy: Append httpCompression: mimeTypes: - \"\" httpEmptyRequestsPolicy: Respond logEmptyRequests: Log tuningOptions: clientFinTimeout: 1s clientTimeout: 30s headerBufferBytes: 0 headerBufferMaxRewriteBytes: 0 healthCheckInterval: 5s maxConnections: 0 serverFinTimeout: 1s serverTimeout: 30s threadCount: 4 tlsInspectDelay: 5s tunnelTimeout: 1h",
"sudo systemctl start microshift",
"sudo systemctl restart microshift",
"oc get pods -n openshift-ingress",
"NAME READY STATUS RESTARTS AGE router-default-8649b5bf65-w29cn 1/1 Running 0 6m10s"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/configuring/microshift-ingress-controller |
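As a concrete illustration of the snippet-based approach described above, the following sketch places two of the documented tuningOptions in a drop-in file. The file name and timeout values are illustrative assumptions; check the exact nesting against the config.yaml.default shipped with your release:
# /etc/microshift/config.d/10-ingress-tuning.yaml
ingress:
  tuningOptions:
    serverTimeout: 60s     # raise when back ends respond slowly and clients see closed connections
    serverFinTimeout: 2s   # lower to reap connections that applications fail to close
After adding the snippet, restart MicroShift as shown in the procedure for the change to take effect.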
Chapter 2. Understanding disconnected installation mirroring | Chapter 2. Understanding disconnected installation mirroring You can use a mirror registry for disconnected installations and to ensure that your clusters only use container images that satisfy your organization's controls on external content. Before you install a cluster on infrastructure that you provision in a disconnected environment, you must mirror the required container images into that environment. To mirror container images, you must have a registry for mirroring. 2.1. Mirroring images for a disconnected installation through the Agent-based Installer You can use one of the following procedures to mirror your OpenShift Container Platform image repository to your mirror registry: Mirroring images for a disconnected installation Mirroring images for a disconnected installation using the oc-mirror plugin 2.2. About mirroring the OpenShift Container Platform image repository for a disconnected registry To use mirror images for a disconnected installation with the Agent-based Installer, you must modify the install-config.yaml file. You can mirror the release image by using the output of either the oc adm release mirror or oc mirror command. This is dependent on which command you used to set up the mirror registry. The following example shows the output of the oc adm release mirror command. $ oc adm release mirror Example output To use the new mirrored repository to install, add the following section to the install-config.yaml: imageContentSources: mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: quay.io/openshift-release-dev/ocp-v4.0-art-dev mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: registry.ci.openshift.org/ocp/release The following example shows part of the imageContentSourcePolicy.yaml file generated by the oc-mirror plugin. The file can be found in the results directory, for example oc-mirror-workspace/results-1682697932/ . Example imageContentSourcePolicy.yaml file spec: repositoryDigestMirrors: - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release-images source: quay.io/openshift-release-dev/ocp-release 2.2.1. Configuring the Agent-based Installer to use mirrored images You must use the output of either the oc adm release mirror command or the oc-mirror plugin to configure the Agent-based Installer to use mirrored images. Procedure If you used the oc-mirror plugin to mirror your release images: Open the imageContentSourcePolicy.yaml located in the results directory, for example oc-mirror-workspace/results-1682697932/ . Copy the text in the repositoryDigestMirrors section of the yaml file. If you used the oc adm release mirror command to mirror your release images: Copy the text in the imageContentSources section of the command output. Paste the copied text into the imageContentSources field of the install-config.yaml file. Add the certificate file used for the mirror registry to the additionalTrustBundle field of the yaml file. Important The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry.
Example install-config.yaml file additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- If you are using GitOps ZTP manifests: add the registries.conf and ca-bundle.crt files to the mirror path to add the mirror configuration in the agent ISO image. Note You can create the registries.conf file from the output of either the oc adm release mirror command or the oc mirror plugin. The format of the /etc/containers/registries.conf file has changed. It is now version 2 and in TOML format. Example registries.conf file [[registry]] location = "registry.ci.openshift.org/ocp/release" mirror-by-digest-only = true [[registry.mirror]] location = "virthost.ostest.test.metalkube.org:5000/localimages/local-release-image" [[registry]] location = "quay.io/openshift-release-dev/ocp-v4.0-art-dev" mirror-by-digest-only = true [[registry.mirror]] location = "virthost.ostest.test.metalkube.org:5000/localimages/local-release-image" | [
"oc adm release mirror",
"To use the new mirrored repository to install, add the following section to the install-config.yaml: imageContentSources: mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: quay.io/openshift-release-dev/ocp-v4.0-art-dev mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: registry.ci.openshift.org/ocp/release",
"spec: repositoryDigestMirrors: - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release-images source: quay.io/openshift-release-dev/ocp-release",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"[[registry]] location = \"registry.ci.openshift.org/ocp/release\" mirror-by-digest-only = true [[registry.mirror]] location = \"virthost.ostest.test.metalkube.org:5000/localimages/local-release-image\" [[registry]] location = \"quay.io/openshift-release-dev/ocp-v4.0-art-dev\" mirror-by-digest-only = true [[registry.mirror]] location = \"virthost.ostest.test.metalkube.org:5000/localimages/local-release-image\""
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_an_on-premise_cluster_with_the_agent-based_installer/understanding-disconnected-installation-mirroring |
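Putting the two copy/paste steps together, the relevant fragment of install-config.yaml ends up looking roughly like the following sketch, using the mirror host from the examples above; the certificate body is a placeholder:
# Mirrors copied from the oc adm release mirror output
imageContentSources:
- mirrors:
  - virthost.ostest.test.metalkube.org:5000/localimages/local-release-image
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
- mirrors:
  - virthost.ostest.test.metalkube.org:5000/localimages/local-release-image
  source: registry.ci.openshift.org/ocp/release
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  <contents of the certificate used by the mirror registry>
  -----END CERTIFICATE-----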
Chapter 7. Enable MicroProfile application development for JBoss EAP on Red Hat CodeReady Studio | Chapter 7. Enable MicroProfile application development for JBoss EAP on Red Hat CodeReady Studio If you want to incorporate MicroProfile capabilities in applications that you develop on CodeReady Studio, you must enable MicroProfile support for JBoss EAP in CodeReady Studio. JBoss EAP expansion packs provide support for MicroProfile. JBoss EAP expansion packs are not supported on JBoss EAP 7.2 and earlier. Each version of the JBoss EAP expansion pack supports specific patches of JBoss EAP. For details, see the JBoss EAP expansion pack Support and Life Cycle Policies page. Important The JBoss EAP XP Quickstarts for OpenShift are provided as Technology Preview only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. 7.1. Configuring CodeReady Studio to use MicroProfile capabilities To enable MicroProfile support on JBoss EAP, register a new runtime server for JBoss EAP XP, and then create the new JBoss EAP 7.4 server. Give the server an appropriate name that helps you recognize that it supports MicroProfile capabilities. This server uses a newly created JBoss EAP XP runtime that points to the runtime installed previously and uses the standalone-microprofile.xml configuration file. Note If you set the Target runtime to 7.4 or a later runtime version in Red Hat CodeReady Studio, your project is compatible with the Jakarta EE 8 specification. Prerequisites JBoss EAP XP 3.0.0 has been installed . Procedure Set up the new server on the New Server dialog box. In the Select server type list, select Red Hat JBoss Enterprise Application Platform 7.4 . In the Server's host name field, enter localhost . In the Server name field, enter JBoss EAP 7.4 XP . Click . Configure the new server. In the Home directory field, if you do not want to use the default setting, specify a new directory; for example: home/myname/dev/microprofile/runtimes/jboss-eap-7.3 . Make sure the Execution Environment is set to JavaSE-1.8 . Optional: Change the values in the Server base directory and Configuration file fields. Click Finish . Result You are now ready to begin developing applications using MicroProfile capabilities, or to begin using the MicroProfile quickstarts for JBoss EAP. 7.2. Using MicroProfile quickstarts for CodeReady Studio Enabling the MicroProfile quickstarts makes the simple examples available to run and test on your installed server. These examples illustrate the following MicroProfile capabilities. MicroProfile Config MicroProfile Fault Tolerance MicroProfile Health MicroProfile JWT MicroProfile Metrics MicroProfile OpenAPI MicroProfile OpenTracing MicroProfile REST Client Procedure Import the pom.xml file from the Quickstart Parent Artifact. If the quickstart you are using requires environment variables, configure the environment variables. Define environment variables on the launch configuration on the server Overview dialog box.
For example, the microprofile-opentracing quickstart uses the following environment variables: JAEGER_REPORTER_LOG_SPANS set to true JAEGER_SAMPLER_PARAM set to 1 JAEGER_SAMPLER_TYPE set to const Additional resources About Microprofile About JBoss Enterprise Application Platform expansion pack Red Hat JBoss Enterprise Application Platform expansion pack Support and Life Cycle Policies | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/using_jboss_eap_xp_3.0.0/assembly-enable-microprofile-application-development-jboss-eap-codeready-studio_default |
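For readers who prefer to run the quickstart server from a terminal rather than from the CodeReady Studio launch configuration, the same variables can be exported in the shell. This is a sketch; the installation path is a placeholder and the configuration file is the standalone-microprofile.xml named above:
# Environment expected by the microprofile-opentracing quickstart
export JAEGER_REPORTER_LOG_SPANS=true
export JAEGER_SAMPLER_PARAM=1
export JAEGER_SAMPLER_TYPE=const

# Start the server with the MicroProfile configuration
EAP_HOME=/home/myname/dev/microprofile/runtimes/jboss-eap-xp   # placeholder path
$EAP_HOME/bin/standalone.sh -c standalone-microprofile.xml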
7.27. coolkey | 7.27. coolkey 7.27.1. RHBA-2013:0397 - coolkey bug fix and enhancement update Updated coolkey packages that fix several bugs and add an enhancement are now available for Red Hat Enterprise Linux 6. Coolkey is a smart card support library for the CoolKey, CAC (Common Access Card), and PIV (Personal Identity Verification) smart cards. Bug Fixes BZ#861108 Previously, Coolkey was unable to recognize PIV-I cards. This update fixes the bug and Coolkey now allows these cards to be read and display certificate information as expected. BZ# 879563 Prior to this update, The pkcs11_listcerts and pklogin_finder utilities were unable to recognize certificates and USB tokens on smart cards after upgrading the Coolkey library. A patch has been provided to address this issue and these utilities now work as expected. BZ# 806038 Previously, the remote-viewer utility failed to utilize a plugged smart card reader when a Spice client was running. Eventually, the client could terminate unexpectedly. Now, remote-viewer recognizes the reader and offers authentication once the card is inserted and the crashes no longer occur. BZ# 884266 Previously, certain new PIV-II smart cards could not be recognized by client card readers, the ESC card manager, or the pklogin_finder utility. A patch has been provided to address this issue and PIV-II cards now work with Coolkey as expected. Enhancement BZ#805693 Support for Oberthur Smart Cards has been added to the Coolkey library. Users of coolkey are advised to upgrade to these updated packages, which fix these bugs and add this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/coolkey |
Chapter 8. Migrating upstream Keycloak to Red Hat build of Keycloak 26.0 | Chapter 8. Migrating upstream Keycloak to Red Hat build of Keycloak 26.0 Starting with version 22, minimal differences exist between Red Hat build of Keycloak and upstream Keycloak. The following differences exist: For upstream Keycloak, the distribution artifacts are on keycloak.org ; for Red Hat build of Keycloak, the distribution artifacts are on the Red Hat customer portal . Oracle and MSSQL database drivers are bundled with upstream Keycloak, but not bundled with Red Hat build of Keycloak. See Configuring the database for detailed steps on how to install those drivers. The GELF log handler is not available in Red Hat build of Keycloak. The migration process depends on the version of Keycloak to be migrated and the type of Keycloak installation. See the following sections for details. 8.1. Matching Keycloak version The migration process depends on the version of Keycloak to be migrated. If your Keycloak project version matches the Red Hat build of Keycloak version, migrate Keycloak by using the Red Hat build of Keycloak artifacts on the Red Hat customer portal . If your Keycloak project version is an older version, use the Keycloak Upgrading Guide to upgrade Keycloak to match the Red Hat build of Keycloak version. Then, migrate Keycloak using the artifacts on the Red Hat customer portal . If your Keycloak project version is greater than the Red Hat build of Keycloak version, you cannot migrate to Red Hat build of Keycloak. Instead, create a new deployment of Red Hat build of Keycloak or wait for a future Red Hat build of Keycloak release. 8.2. Migration based on type of Keycloak installation Once you have a matching version of Keycloak, migrate Keycloak based on the type of installation. If you installed Keycloak from a ZIP distribution, migrate Keycloak by using the artifacts on the Red Hat customer portal . If you deployed the Keycloak Operator, uninstall it and install the Red Hat build of Keycloak Operator by using the Operator guide . The CRs are compatible between upstream Keycloak and Red Hat build of Keycloak. If you created a custom server container image, rebuild it by using the Red Hat build of Keycloak image. See Running Keycloak in a Container . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/migration_guide/migrating-keycloak |
Chapter 25. CXF | Chapter 25. CXF Both producer and consumer are supported The CXF component provides integration with Apache CXF for connecting to JAX-WS services hosted in CXF. Tip When using CXF in streaming modes (see DataFormat option), then also read about Stream caching. 25.1. Dependencies When using cxf with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-cxf-soap-starter</artifactId> </dependency> 25.2. URI format There are two URI formats for this endpoint: cxfEndpoint and someAddress . Where cxfEndpoint represents a bean ID that references a bean in the Spring bean registry. With this URI format, most of the endpoint details are specified in the bean definition. Where someAddress specifies the CXF endpoint's address. With this URI format, most of the endpoint details are specified using options. For either style above, you can append options to the URI as follows: 25.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 25.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 25.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 25.4. Component Options The CXF component supports 6 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean allowStreaming (advanced) This option controls whether the CXF component, when running in PAYLOAD mode, will DOM parse the incoming messages into DOM Elements or keep the payload as a javax.xml.transform.Source object that would allow streaming in some cases. Boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean headerFilterStrategy (filter) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy useGlobalSslContextParameters (security) Enable usage of global SSL context parameters. false boolean 25.5. Endpoint Options The CXF endpoint is configured using URI syntax: with the following path and query parameters: 25.5.1. Path Parameters (2 parameters) Name Description Default Type beanId (common) To lookup an existing configured CxfEndpoint. Must used bean: as prefix. String address (service) The service publish address. String 25.5.2. Query Parameters (35 parameters) Name Description Default Type dataFormat (common) The data type messages supported by the CXF endpoint. Enum values: PAYLOAD RAW MESSAGE CXF_MESSAGE POJO POJO DataFormat wrappedStyle (common) The WSDL style that describes how parameters are represented in the SOAP body. If the value is false, CXF will chose the document-literal unwrapped style, If the value is true, CXF will chose the document-literal wrapped style. Boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern cookieHandler (producer) Configure a cookie handler to maintain a HTTP session. CookieHandler defaultOperationName (producer) This option will set the default operationName that will be used by the CxfProducer which invokes the remote service. String defaultOperationNamespace (producer) This option will set the default operationNamespace that will be used by the CxfProducer which invokes the remote service. String hostnameVerifier (producer) The hostname verifier to be used. Use the # notation to reference a HostnameVerifier from the registry. HostnameVerifier lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean sslContextParameters (producer) The Camel SSL setting reference. Use the # notation to reference the SSL Context. SSLContextParameters wrapped (producer) Which kind of operation that CXF endpoint producer will invoke. false boolean synchronous (producer (advanced)) Sets whether synchronous processing should be strictly used. false boolean allowStreaming (advanced) This option controls whether the CXF component, when running in PAYLOAD mode, will DOM parse the incoming messages into DOM Elements or keep the payload as a javax.xml.transform.Source object that would allow streaming in some cases. Boolean bus (advanced) To use a custom configured CXF Bus. Bus continuationTimeout (advanced) This option is used to set the CXF continuation timeout which could be used in CxfConsumer by default when the CXF server is using Jetty or Servlet transport. 30000 long cxfBinding (advanced) To use a custom CxfBinding to control the binding between Camel Message and CXF Message. CxfBinding cxfConfigurer (advanced) This option could apply the implementation of org.apache.camel.component.cxf.CxfEndpointConfigurer which supports to configure the CXF endpoint in programmatic way. User can configure the CXF server and client by implementing configure{ServerClient} method of CxfEndpointConfigurer. CxfConfigurer defaultBus (advanced) Will set the default bus when CXF endpoint create a bus by itself. false boolean headerFilterStrategy (advanced) To use a custom HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy mergeProtocolHeaders (advanced) Whether to merge protocol headers. If enabled then propagating headers between Camel and CXF becomes more consistent and similar. For more details see CAMEL-6393. false boolean mtomEnabled (advanced) To enable MTOM (attachments). This requires to use POJO or PAYLOAD data format mode. false boolean properties (advanced) To set additional CXF options using the key/value pairs from the Map. For example to turn on stacktraces in SOAP faults, properties.faultStackTraceEnabled=true. Map skipPayloadMessagePartCheck (advanced) Sets whether SOAP message validation should be disabled. false boolean loggingFeatureEnabled (logging) This option enables CXF Logging Feature which writes inbound and outbound SOAP messages to log. false boolean loggingSizeLimit (logging) To limit the total size of number of bytes the logger will output when logging feature has been enabled and -1 for no limit. 49152 int skipFaultLogging (logging) This option controls whether the PhaseInterceptorChain skips logging the Fault that it catches. false boolean password (security) This option is used to set the basic authentication information of password for the CXF client. String username (security) This option is used to set the basic authentication information of username for the CXF client. String bindingId (service) The bindingId for the service model to use. String portName (service) The endpoint name this service is implementing, it maps to the wsdl:portname. 
In the format of ns:PORT_NAME where ns is a namespace prefix valid at this scope. String publishedEndpointUrl (service) This option can override the endpointUrl that is published from the WSDL, which can be accessed with the service address URL plus ?wsdl. String serviceClass (service) The class name of the SEI (Service Endpoint Interface) class which could have JSR181 annotation or not. Class serviceName (service) The service name this service is implementing, it maps to the wsdl:servicename. String wsdlURL (service) The location of the WSDL. Can be on the classpath, file system, or be hosted remotely. String The serviceName and portName are QNames, so if you provide them be sure to prefix them with their {namespace} as shown in the examples above. 25.5.3. Descriptions of the dataformats In Apache Camel, the Camel CXF component is the key to integrating routes with Web services. You can use the Camel CXF component to create a CXF endpoint, which can be used in either of the following ways: Consumer - (at the start of a route) represents a Web service instance, which integrates with the route. The type of payload injected into the route depends on the value of the endpoint's dataFormat option. Producer - (at other points in the route) represents a WS client proxy, which converts the current exchange object into an operation invocation on a remote Web service. The format of the current exchange must match the endpoint's dataFormat setting. DataFormat Description POJO POJOs (Plain old Java objects) are the Java parameters to the method being invoked on the target server. Both Protocol and Logical JAX-WS handlers are supported. PAYLOAD PAYLOAD is the message payload (the contents of the soap:body) after message configuration in the CXF endpoint is applied. Only Protocol JAX-WS handler is supported. Logical JAX-WS handler is not supported. RAW RAW mode provides the raw message stream that is received from the transport layer. It is not possible to touch or change the stream. Some of the CXF interceptors are removed if you are using this kind of DataFormat, so you can't see any SOAP headers after the camel-cxf consumer. JAX-WS handler is not supported. CXF_MESSAGE CXF_MESSAGE allows for invoking the full capabilities of CXF interceptors by converting the message from the transport layer into a raw SOAP message. You can determine the data format mode of an exchange by retrieving the exchange property, CamelCXFDataFormat. The exchange key constant is defined in org.apache.camel.component.cxf.common.message.CxfConstants.DATA_FORMAT_PROPERTY. 25.5.4. How to enable CXF's LoggingOutInterceptor in RAW mode CXF's LoggingOutInterceptor writes the outbound message that goes on the wire to the logging system (Java Util Logging). Since the LoggingOutInterceptor is in the PRE_STREAM phase (but the PRE_STREAM phase is removed in RAW mode), you have to configure the LoggingOutInterceptor to be run during the WRITE phase. The following is an example.
@Bean public CxfEndpoint serviceEndpoint(LoggingOutInterceptor loggingOutInterceptor) { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setAddress("http://localhost:" + port + "/services" + SERVICE_ADDRESS); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.HelloService.class); Map<String, Object> properties = new HashMap<String, Object>(); properties.put("dataFormat", "RAW"); cxfEndpoint.setProperties(properties); cxfEndpoint.getOutInterceptors().add(loggingOutInterceptor); return cxfEndpoint; } @Bean public LoggingOutInterceptor loggingOutInterceptor() { LoggingOutInterceptor logger = new LoggingOutInterceptor("write"); return logger; } 25.5.5. Description of relayHeaders option There are in-band and out-of-band on-the-wire headers from the perspective of a JAXWS WSDL-first developer. The in-band headers are headers that are explicitly defined as part of the WSDL binding contract for an endpoint such as SOAP headers. The out-of-band headers are headers that are serialized over the wire, but are not explicitly part of the WSDL binding contract. Headers relaying/filtering is bi-directional. When a route has a CXF endpoint and the developer needs to have on-the-wire headers, such as SOAP headers, be relayed along the route to be consumed say by another JAXWS endpoint, then relayHeaders should be set to true , which is the default value. 25.5.6. Available only in POJO mode The relayHeaders=true expresses an intent to relay the headers. The actual decision on whether a given header is relayed is delegated to a pluggable instance that implements the MessageHeadersRelay interface. A concrete implementation of MessageHeadersRelay will be consulted to decide if a header needs to be relayed or not. There is already an implementation of SoapMessageHeadersRelay which binds itself to well-known SOAP name spaces. Currently only out-of-band headers are filtered, and in-band headers will always be relayed when relayHeaders=true . If there is a header on the wire whose name space is unknown to the runtime, then a fall back DefaultMessageHeadersRelay will be used, which simply allows all headers to be relayed. The relayHeaders=false setting specifies that all headers in-band and out-of-band should be dropped. You can plugin your own MessageHeadersRelay implementations overriding or adding additional ones to the list of relays. In order to override a preloaded relay instance just make sure that your MessageHeadersRelay implementation services the same name spaces as the one you looking to override. Also note, that the overriding relay has to service all of the name spaces as the one you looking to override, or else a runtime exception on route start up will be thrown as this would introduce an ambiguity in name spaces to relay instance mappings. <cxf:cxfEndpoint ...> <cxf:properties> <entry key="org.apache.camel.cxf.message.headers.relays"> <list> <ref bean="customHeadersRelay"/> </list> </entry> </cxf:properties> </cxf:cxfEndpoint> <bean id="customHeadersRelay" class="org.apache.camel.component.cxf.soap.headers.CustomHeadersRelay"/> Take a look at the tests that show how you'd be able to relay/drop headers here: https://github.com/apache/camel/blob/main/components/camel-cxf/camel-cxf-spring-soap/src/test/java/org/apache/camel/component/cxf/soap/headers/CxfMessageHeadersRelayTest.java POJO and PAYLOAD modes are supported. In POJO mode, only out-of-band message headers are available for filtering as the in-band headers have been processed and removed from header list by CXF. 
The in-band headers are incorporated into the MessageContentsList in POJO mode. The camel-cxf component does not make any attempt to remove the in-band headers from the MessageContentsList. If filtering of in-band headers is required, please use PAYLOAD mode or plug in a (pretty straightforward) CXF interceptor/JAXWS Handler to the CXF endpoint. The Message Header Relay mechanism has been merged into CxfHeaderFilterStrategy. The relayHeaders option, its semantics, and default value remain the same, but it is a property of CxfHeaderFilterStrategy. Here is an example of configuring it. @Bean public HeaderFilterStrategy dropAllMessageHeadersStrategy() { CxfHeaderFilterStrategy headerFilterStrategy = new CxfHeaderFilterStrategy(); headerFilterStrategy.setRelayHeaders(false); return headerFilterStrategy; } Then, your endpoint can reference the CxfHeaderFilterStrategy. @Bean public CxfEndpoint routerNoRelayEndpoint(HeaderFilterStrategy dropAllMessageHeadersStrategy) { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.soap.headers.HeaderTester.class); cxfEndpoint.setAddress("/CxfMessageHeadersRelayTest/HeaderService/routerNoRelayEndpoint"); cxfEndpoint.setWsdlURL("soap_header.wsdl"); cxfEndpoint.setEndpointNameAsQName( QName.valueOf("{http://apache.org/camel/component/cxf/soap/headers}SoapPortNoRelay")); cxfEndpoint.setServiceNameAsQName(SERVICENAME); Map<String, Object> properties = new HashMap<String, Object>(); properties.put("dataFormat", "PAYLOAD"); cxfEndpoint.setProperties(properties); cxfEndpoint.setHeaderFilterStrategy(dropAllMessageHeadersStrategy); return cxfEndpoint; } @Bean public CxfEndpoint serviceNoRelayEndpoint(HeaderFilterStrategy dropAllMessageHeadersStrategy) { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.soap.headers.HeaderTester.class); cxfEndpoint.setAddress("http://localhost:" + port + "/services/CxfMessageHeadersRelayTest/HeaderService/routerNoRelayEndpointBackend"); cxfEndpoint.setWsdlURL("soap_header.wsdl"); cxfEndpoint.setEndpointNameAsQName( QName.valueOf("{http://apache.org/camel/component/cxf/soap/headers}SoapPortNoRelay")); cxfEndpoint.setServiceNameAsQName(SERVICENAME); Map<String, Object> properties = new HashMap<String, Object>(); properties.put("dataFormat", "PAYLOAD"); cxfEndpoint.setProperties(properties); cxfEndpoint.setHeaderFilterStrategy(dropAllMessageHeadersStrategy); return cxfEndpoint; } Then configure the route as follows: from("cxf:bean:routerNoRelayEndpoint") .to("cxf:bean:serviceNoRelayEndpoint"); The MessageHeadersRelay interface has changed slightly and has been renamed to MessageHeaderFilter. It is a property of CxfHeaderFilterStrategy. Here is an example of configuring user defined Message Header Filters: @Bean public HeaderFilterStrategy customMessageFilterStrategy() { CxfHeaderFilterStrategy headerFilterStrategy = new CxfHeaderFilterStrategy(); List<MessageHeaderFilter> headerFilterList = new ArrayList<MessageHeaderFilter>(); headerFilterList.add(new SoapMessageHeaderFilter()); headerFilterList.add(new CustomHeaderFilter()); headerFilterStrategy.setMessageHeaderFilters(headerFilterList); return headerFilterStrategy; } In addition to relayHeaders, the following properties can be configured in CxfHeaderFilterStrategy.
Name Required Description relayHeaders No All message headers will be processed by Message Header Filters Type : boolean Default : true relayAllMessageHeaders No All message headers will be propagated (without processing by Message Header Filters) Type : boolean Default : false allowFilterNamespaceClash No If two filters overlap in activation namespace, the property control how it should be handled. If the value is true , last one wins. If the value is false , it will throw an exception Type : boolean Default : false 25.6. Configure the CXF endpoints with Spring You can configure the CXF endpoint with the Spring configuration file shown below, and you can also embed the endpoint into the camelContext tags. When you are invoking the service endpoint, you can set the operationName and operationNamespace headers to explicitly state which operation you are calling. <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:cxf="http://camel.apache.org/schema/cxf/jaxws" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/cxf/jaxws http://camel.apache.org/schema/cxf/jaxws/camel-cxf.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd"> <cxf:cxfEndpoint id="routerEndpoint" address="http://localhost:9003/CamelContext/RouterPort" serviceClass="org.apache.hello_world_soap_http.GreeterImpl"/> <cxf:cxfEndpoint id="serviceEndpoint" address="http://localhost:9000/SoapContext/SoapPort" wsdlURL="testutils/hello_world.wsdl" serviceClass="org.apache.hello_world_soap_http.Greeter" endpointName="s:SoapPort" serviceName="s:SOAPService" xmlns:s="http://apache.org/hello_world_soap_http" /> <camelContext id="camel" xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="cxf:bean:routerEndpoint" /> <to uri="cxf:bean:serviceEndpoint" /> </route> </camelContext> </beans> Be sure to include the JAX-WS schemaLocation attribute specified on the root beans element. This allows CXF to validate the file and is required. Also note the namespace declarations at the end of the <cxf:cxfEndpoint/> tag. These declarations are required because the combined {namespace}localName syntax is presently not supported for this tag's attribute values. The cxf:cxfEndpoint element supports many additional attributes: Name Value PortName The endpoint name this service is implementing, it maps to the wsdl:port@name . In the format of ns:PORT_NAME where ns is a namespace prefix valid at this scope. serviceName The service name this service is implementing, it maps to the wsdl:service@name . In the format of ns:SERVICE_NAME where ns is a namespace prefix valid at this scope. wsdlURL The location of the WSDL. Can be on the classpath, file system, or be hosted remotely. bindingId The bindingId for the service model to use. address The service publish address. bus The bus name that will be used in the JAX-WS endpoint. serviceClass The class name of the SEI (Service Endpoint Interface) class which could have JSR181 annotation or not. It also supports many child elements: Name Value cxf:inInterceptors The incoming interceptors for this endpoint. A list of <bean> or <ref> . cxf:inFaultInterceptors The incoming fault interceptors for this endpoint. A list of <bean> or <ref> . cxf:outInterceptors The outgoing interceptors for this endpoint. A list of <bean> or <ref> . 
cxf:outFaultInterceptors The outgoing fault interceptors for this endpoint. A list of <bean> or <ref>. cxf:properties A properties map which should be supplied to the JAX-WS endpoint. See below. cxf:handlers A JAX-WS handler list which should be supplied to the JAX-WS endpoint. See below. cxf:dataBinding You can specify which DataBinding will be used in the endpoint. This can be supplied using the Spring <bean class="MyDataBinding"/> syntax. cxf:binding You can specify the BindingFactory for this endpoint to use. This can be supplied using the Spring <bean class="MyBindingFactory"/> syntax. cxf:features The features that hold the interceptors for this endpoint. A list of beans or refs. cxf:schemaLocations The schema locations for the endpoint to use. A list of schemaLocations. cxf:serviceFactory The service factory for this endpoint to use. This can be supplied using the Spring <bean class="MyServiceFactory"/> syntax. You can find more advanced examples that show how to provide interceptors, properties and handlers on the CXF JAX-WS Configuration page. Note You can use cxf:properties to set the camel-cxf endpoint's dataFormat and setDefaultBus properties from the Spring configuration file. <cxf:cxfEndpoint id="testEndpoint" address="http://localhost:9000/router" serviceClass="org.apache.camel.component.cxf.HelloService" endpointName="s:PortName" serviceName="s:ServiceName" xmlns:s="http://www.example.com/test"> <cxf:properties> <entry key="dataFormat" value="RAW"/> <entry key="setDefaultBus" value="true"/> </cxf:properties> </cxf:cxfEndpoint> Note In SpringBoot, you can use Spring XML files to configure camel-cxf and use code similar to the following example to create XML configured beans: @ImportResource({ "classpath:spring-configuration.xml" }) However, the use of Java code configured beans (as shown in other examples) is best practice in SpringBoot. 25.7. How to make the camel-cxf component use log4j instead of java.util.logging CXF's default logger is java.util.logging. If you want to change it to log4j, proceed as follows. Create a file, in the classpath, named META-INF/cxf/org.apache.cxf.logger. This file should contain the fully-qualified name of the class, org.apache.cxf.common.logging.Log4jLogger, with no comments, on a single line. 25.8. How to let camel-cxf response start with xml processing instruction If you are using a SOAP client such as PHP, you will get the following error, because CXF doesn't add the XML processing instruction <?xml version="1.0" encoding="utf-8"?>: To resolve this issue, you just need to tell StaxOutInterceptor to write the XML start document for you, as in the WriteXmlDeclarationInterceptor below: public class WriteXmlDeclarationInterceptor extends AbstractPhaseInterceptor<SoapMessage> { public WriteXmlDeclarationInterceptor() { super(Phase.PRE_STREAM); addBefore(StaxOutInterceptor.class.getName()); } public void handleMessage(SoapMessage message) throws Fault { message.put("org.apache.cxf.stax.force-start-document", Boolean.TRUE); } } As an alternative you can add a message header for it as demonstrated in CxfConsumerTest : // set up the response context which force start document Map<String, Object> map = new HashMap<String, Object>(); map.put("org.apache.cxf.stax.force-start-document", Boolean.TRUE); exchange.getOut().setHeader(Client.RESPONSE_CONTEXT, map); 25.9. How to override the CXF producer address from message header The camel-cxf producer supports overriding the target service address by setting the message header CamelDestinationOverrideUrl.
// set up the service address from the message header to override the setting of CXF endpoint exchange.getIn().setHeader(Exchange.DESTINATION_OVERRIDE_URL, constant(getServiceAddress())); 25.10. How to consume a message from a camel-cxf endpoint in POJO data format The camel-cxf endpoint consumer POJO data format is based on the CXF invoker , so the message header has a property with the name of CxfConstants.OPERATION_NAME and the message body is a list of the SEI method parameters. Consider the PersonProcessor example code: public class PersonProcessor implements Processor { private static final Logger LOG = LoggerFactory.getLogger(PersonProcessor.class); @Override @SuppressWarnings("unchecked") public void process(Exchange exchange) throws Exception { LOG.info("processing exchange in camel"); BindingOperationInfo boi = (BindingOperationInfo) exchange.getProperty(BindingOperationInfo.class.getName()); if (boi != null) { LOG.info("boi.isUnwrapped" + boi.isUnwrapped()); } // Get the parameters list which element is the holder. MessageContentsList msgList = (MessageContentsList) exchange.getIn().getBody(); Holder<String> personId = (Holder<String>) msgList.get(0); Holder<String> ssn = (Holder<String>) msgList.get(1); Holder<String> name = (Holder<String>) msgList.get(2); if (personId.value == null || personId.value.length() == 0) { LOG.info("person id 123, so throwing exception"); // Try to throw out the soap fault message org.apache.camel.wsdl_first.types.UnknownPersonFault personFault = new org.apache.camel.wsdl_first.types.UnknownPersonFault(); personFault.setPersonId(""); org.apache.camel.wsdl_first.UnknownPersonFault fault = new org.apache.camel.wsdl_first.UnknownPersonFault("Get the null value of person name", personFault); exchange.getMessage().setBody(fault); return; } name.value = "Bonjour"; ssn.value = "123"; LOG.info("setting Bonjour as the response"); // Set the response message, first element is the return value of the operation, // the others are the holders of method parameters exchange.getMessage().setBody(new Object[] { null, personId, ssn, name }); } } 25.11. How to prepare the message for the camel-cxf endpoint in POJO data format The camel-cxf endpoint producer is based on the CXF client API . First you need to specify the operation name in the message header, then add the method parameters to a list, and initialize the message with this parameter list. The response message's body is a messageContentsList, you can get the result from that list. If you don't specify the operation name in the message header, CxfProducer will try to use the defaultOperationName from CxfEndpoint , if there is no defaultOperationName set on CxfEndpoint , it will pick up the first operationName from the Operation list. 
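For example, the default operation can also be configured directly on the endpoint URI instead of setting the CxfConstants.OPERATION_NAME header on each exchange. The following is a minimal sketch; the direct endpoint name, the operation name, and the namespace are hypothetical and only illustrate the defaultOperationName and defaultOperationNamespace options described above:

from("direct:invokeGreeter")
    .to("cxf:bean:serviceEndpoint?defaultOperationName=greetMe&defaultOperationNamespace=http://apache.org/hello_world_soap_http");

With these options set, a producer invocation that does not carry the operation name header falls back to the configured default instead of the first operation in the service model.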
If you want to get the object array from the message body, you can get the body using message.getBody(Object[].class) , as shown in CxfProducerRouterTest.testInvokingSimpleServerWithParams : Exchange senderExchange = new DefaultExchange(context, ExchangePattern.InOut); final List<String> params = new ArrayList<>(); // Prepare the request message for the camel-cxf procedure params.add(TEST_MESSAGE); senderExchange.getIn().setBody(params); senderExchange.getIn().setHeader(CxfConstants.OPERATION_NAME, ECHO_OPERATION); Exchange exchange = template.send("direct:EndpointA", senderExchange); org.apache.camel.Message out = exchange.getMessage(); // The response message's body is an MessageContentsList which first element is the return value of the operation, // If there are some holder parameters, the holder parameter will be filled in the reset of List. // The result will be extract from the MessageContentsList with the String class type MessageContentsList result = (MessageContentsList) out.getBody(); LOG.info("Received output text: " + result.get(0)); Map<String, Object> responseContext = CastUtils.cast((Map<?, ?>) out.getHeader(Client.RESPONSE_CONTEXT)); assertNotNull(responseContext); assertEquals("UTF-8", responseContext.get(org.apache.cxf.message.Message.ENCODING), "We should get the response context here"); assertEquals("echo " + TEST_MESSAGE, result.get(0), "Reply body on Camel is wrong"); 25.12. How to deal with the message for a camel-cxf endpoint in PAYLOAD data format PAYLOAD means that you process the payload from the SOAP envelope as a native CxfPayload. Message.getBody() will return a org.apache.camel.component.cxf.CxfPayload object, with getters for SOAP message headers and the SOAP body. See CxfConsumerPayloadTest : protected RouteBuilder createRouteBuilder() { return new RouteBuilder() { public void configure() { from(simpleEndpointURI + "&dataFormat=PAYLOAD").to("log:info").process(new Processor() { @SuppressWarnings("unchecked") public void process(final Exchange exchange) throws Exception { CxfPayload<SoapHeader> requestPayload = exchange.getIn().getBody(CxfPayload.class); List<Source> inElements = requestPayload.getBodySources(); List<Source> outElements = new ArrayList<>(); // You can use a customer toStringConverter to turn a CxfPayLoad message into String as you want String request = exchange.getIn().getBody(String.class); XmlConverter converter = new XmlConverter(); String documentString = ECHO_RESPONSE; Element in = new XmlConverter().toDOMElement(inElements.get(0)); // Just check the element namespace if (!in.getNamespaceURI().equals(ELEMENT_NAMESPACE)) { throw new IllegalArgumentException("Wrong element namespace"); } if (in.getLocalName().equals("echoBoolean")) { documentString = ECHO_BOOLEAN_RESPONSE; checkRequest("ECHO_BOOLEAN_REQUEST", request); } else { documentString = ECHO_RESPONSE; checkRequest("ECHO_REQUEST", request); } Document outDocument = converter.toDOMDocument(documentString, exchange); outElements.add(new DOMSource(outDocument.getDocumentElement())); // set the payload header with null CxfPayload<SoapHeader> responsePayload = new CxfPayload<>(null, outElements, null); exchange.getMessage().setBody(responsePayload); } }); } }; } 25.13. How to get and set SOAP headers in POJO mode POJO means that the data format is a "list of Java objects" when the camel-cxf endpoint produces or consumes Camel exchanges. Even though Camel exposes the message body as POJOs in this mode, camel-cxf still provides access to read and write SOAP headers. 
However, since CXF interceptors remove in-band SOAP headers from the header list after they have been processed, only out-of-band SOAP headers are available to camel-cxf in POJO mode. The following example illustrates how to get/set SOAP headers. Suppose we have a route that forwards from one Camel-cxf endpoint to another. That is, SOAP Client -> Camel -> CXF service. We can attach two processors to obtain/insert SOAP headers at (1) before a request goes out to the CXF service and (2) before the response comes back to the SOAP Client. Processors (1) and (2) in this example are InsertRequestOutHeaderProcessor and InsertResponseOutHeaderProcessor. Our route looks like this: from("cxf:bean:routerRelayEndpointWithInsertion") .process(new InsertRequestOutHeaderProcessor()) .to("cxf:bean:serviceRelayEndpointWithInsertion") .process(new InsertResponseOutHeaderProcessor()); The beans routerRelayEndpointWithInsertion and serviceRelayEndpointWithInsertion are defined as follows: @Bean public CxfEndpoint routerRelayEndpointWithInsertion() { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.soap.headers.HeaderTester.class); cxfEndpoint.setAddress("/CxfMessageHeadersRelayTest/HeaderService/routerRelayEndpointWithInsertion"); cxfEndpoint.setWsdlURL("soap_header.wsdl"); cxfEndpoint.setEndpointNameAsQName( QName.valueOf("{http://apache.org/camel/component/cxf/soap/headers}SoapPortRelayWithInsertion")); cxfEndpoint.setServiceNameAsQName(SERVICENAME); cxfEndpoint.getFeatures().add(new LoggingFeature()); return cxfEndpoint; } @Bean public CxfEndpoint serviceRelayEndpointWithInsertion() { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.soap.headers.HeaderTester.class); cxfEndpoint.setAddress("http://localhost:" + port + "/services/CxfMessageHeadersRelayTest/HeaderService/routerRelayEndpointWithInsertionBackend"); cxfEndpoint.setWsdlURL("soap_header.wsdl"); cxfEndpoint.setEndpointNameAsQName( QName.valueOf("{http://apache.org/camel/component/cxf/soap/headers}SoapPortRelayWithInsertion")); cxfEndpoint.setServiceNameAsQName(SERVICENAME); cxfEndpoint.getFeatures().add(new LoggingFeature()); return cxfEndpoint; } SOAP headers are propagated to and from Camel Message headers. The Camel message header name is "org.apache.cxf.headers.Header.list" which is a constant defined in CXF (org.apache.cxf.headers.Header.HEADER_LIST). The header value is a List of CXF SoapHeader objects (org.apache.cxf.binding.soap.SoapHeader). The following snippet is the InsertResponseOutHeaderProcessor (which inserts a new SOAP header into the response message). The way to access SOAP headers in both InsertResponseOutHeaderProcessor and InsertRequestOutHeaderProcessor is actually the same. The only difference between the two processors is setting the direction of the inserted SOAP header.
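Before looking at the insertion example, the following minimal sketch shows how a processor might read the relayed out-of-band SOAP headers from that same Camel header list. The class name ReadSoapHeadersProcessor is hypothetical, and the list may be null when no out-of-band headers were received:

public static class ReadSoapHeadersProcessor implements Processor {
    public void process(Exchange exchange) throws Exception {
        // The relayed SOAP headers are stored under the CXF constant Header.HEADER_LIST
        // ("org.apache.cxf.headers.Header.list") as a List of SoapHeader objects
        List<SoapHeader> soapHeaders = CastUtils.cast((List<?>) exchange.getIn().getHeader(Header.HEADER_LIST));
        if (soapHeaders != null) {
            for (SoapHeader soapHeader : soapHeaders) {
                // Out-of-band headers carry their content as a DOM Element
                Element content = (Element) soapHeader.getObject();
                // inspect soapHeader.getName() or the element content as needed
            }
        }
    }
}

The same list can be modified in place to add or remove headers, which is what the insertion processors in this example do.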
You can find the InsertResponseOutHeaderProcessor example in CxfMessageHeadersRelayTest : public static class InsertResponseOutHeaderProcessor implements Processor { public void process(Exchange exchange) throws Exception { List<SoapHeader> soapHeaders = CastUtils.cast((List<?>)exchange.getIn().getHeader(Header.HEADER_LIST)); // Insert a new header String xml = "<?xml version=\"1.0\" encoding=\"utf-8\"?><outofbandHeader " + "xmlns=\"http://cxf.apache.org/outofband/Header\" hdrAttribute=\"testHdrAttribute\" " + "xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\" soap:mustUnderstand=\"1\">" + "<name>New_testOobHeader</name><value>New_testOobHeaderValue</value></outofbandHeader>"; SoapHeader newHeader = new SoapHeader(soapHeaders.get(0).getName(), DOMUtils.readXml(new StringReader(xml)).getDocumentElement()); // make sure direction is OUT since it is a response message. newHeader.setDirection(Direction.DIRECTION_OUT); //newHeader.setMustUnderstand(false); soapHeaders.add(newHeader); } } 25.14. How to get and set SOAP headers in PAYLOAD mode We've already shown how to access the SOAP message as a CxfPayload object in PAYLOAD mode in the section How to deal with the message for a camel-cxf endpoint in PAYLOAD data format . Once you obtain a CxfPayload object, you can invoke the CxfPayload.getHeaders() method that returns a List of DOM Elements (SOAP headers). For an example see CxfPayLoadSoapHeaderTest : from(getRouterEndpointURI()).process(new Processor() { @SuppressWarnings("unchecked") public void process(Exchange exchange) throws Exception { CxfPayload<SoapHeader> payload = exchange.getIn().getBody(CxfPayload.class); List<Source> elements = payload.getBodySources(); assertNotNull(elements, "We should get the elements here"); assertEquals(1, elements.size(), "Get the wrong elements size"); Element el = new XmlConverter().toDOMElement(elements.get(0)); elements.set(0, new DOMSource(el)); assertEquals("http://camel.apache.org/pizza/types", el.getNamespaceURI(), "Get the wrong namespace URI"); List<SoapHeader> headers = payload.getHeaders(); assertNotNull(headers, "We should get the headers here"); assertEquals(1, headers.size(), "Get the wrong headers size"); assertEquals("http://camel.apache.org/pizza/types", ((Element) (headers.get(0).getObject())).getNamespaceURI(), "Get the wrong namespace URI"); // alternatively you can also get the SOAP header via the camel header: headers = exchange.getIn().getHeader(Header.HEADER_LIST, List.class); assertNotNull(headers, "We should get the headers here"); assertEquals(1, headers.size(), "Get the wrong headers size"); assertEquals("http://camel.apache.org/pizza/types", ((Element) (headers.get(0).getObject())).getNamespaceURI(), "Get the wrong namespace URI"); } }) .to(getServiceEndpointURI()); You can also use the same approach as described in the sub-chapter "How to get and set SOAP headers in POJO mode" to set or get the SOAP headers. So, you can use the header "org.apache.cxf.headers.Header.list" to get and set a list of SOAP headers. This also means that if you have a route that forwards from one Camel-cxf endpoint to another (SOAP Client -> Camel -> CXF service), the SOAP headers sent by the SOAP client are also forwarded to the CXF service. If you do not want these headers to be forwarded, remove them from the Camel header "org.apache.cxf.headers.Header.list". 25.15. SOAP headers are not available in RAW mode SOAP headers are not available in RAW mode as SOAP processing is skipped. 25.16.
How to throw a SOAP Fault from Camel If you are using a camel-cxf endpoint to consume the SOAP request, you may need to throw the SOAP Fault from the camel context. Basically, you can use the throwFault DSL to do that; it works for POJO , PAYLOAD and MESSAGE data format. You can define the soap fault as shown in CxfCustomizedExceptionTest : SOAP_FAULT = new SoapFault(EXCEPTION_MESSAGE, SoapFault.FAULT_CODE_CLIENT); Element detail = SOAP_FAULT.getOrCreateDetail(); Document doc = detail.getOwnerDocument(); Text tn = doc.createTextNode(DETAIL_TEXT); detail.appendChild(tn); Then throw it as you like from(routerEndpointURI).setFaultBody(constant(SOAP_FAULT)); If your CXF endpoint is working in the MESSAGE data format, you could set the SOAP Fault message in the message body and set the response code in the message header as demonstrated by CxfMessageStreamExceptionTest from(routerEndpointURI).process(new Processor() { public void process(Exchange exchange) throws Exception { Message out = exchange.getOut(); // Set the message body with the out.setBody(this.getClass().getResourceAsStream("SoapFaultMessage.xml")); // Set the response code here out.setHeader(org.apache.cxf.message.Message.RESPONSE_CODE, new Integer(500)); } }); Same for using POJO data format. You can set the SOAPFault on the out body. 25.17. How to propagate a camel-cxf endpoint's request and response context CXF client API provides a way to invoke the operation with request and response context. If you are using a camel-cxf endpoint producer to invoke the outside web service, you can set the request context and get response context with the following code: CxfExchange exchange = (CxfExchange)template.send(getJaxwsEndpointUri(), new Processor() { public void process(final Exchange exchange) { final List<String> params = new ArrayList<String>(); params.add(TEST_MESSAGE); // Set the request context to the inMessage Map<String, Object> requestContext = new HashMap<String, Object>(); requestContext.put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, JAXWS_SERVER_ADDRESS); exchange.getIn().setBody(params); exchange.getIn().setHeader(Client.REQUEST_CONTEXT , requestContext); exchange.getIn().setHeader(CxfConstants.OPERATION_NAME, GREET_ME_OPERATION); } }); org.apache.camel.Message out = exchange.getOut(); // The output is an object array, the first element of the array is the return value Object\[\] output = out.getBody(Object\[\].class); LOG.info("Received output text: " + output\[0\]); // Get the response context form outMessage Map<String, Object> responseContext = CastUtils.cast((Map)out.getHeader(Client.RESPONSE_CONTEXT)); assertNotNull(responseContext); assertEquals("Get the wrong wsdl operation name", "{http://apache.org/hello_world_soap_http}greetMe", responseContext.get("javax.xml.ws.wsdl.operation").toString()); 25.18. Attachment Support POJO Mode: Both SOAP with Attachment and MTOM are supported (see example in Payload Mode for enabling MTOM). However, SOAP with Attachment is not tested. Since attachments are marshalled and unmarshalled into POJOs, users typically do not need to deal with the attachment themself. Attachments are propagated to Camel message's attachments if the MTOM is not enabled. So, it is possible to retrieve attachments by Camel Message API DataHandler Message.getAttachment(String id) Payload Mode: MTOM is supported by the component. Attachments can be retrieved by Camel Message APIs mentioned above. SOAP with Attachment (SwA) is supported and attachments can be retrieved. 
SwA is the default (same as setting the CXF endpoint property "mtom-enabled" to false). To enable MTOM, set the CXF endpoint property "mtom-enabled" to true . @Bean public CxfEndpoint routerEndpoint() { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceNameAsQName(SERVICE_QNAME); cxfEndpoint.setEndpointNameAsQName(PORT_QNAME); cxfEndpoint.setAddress("/" + getClass().getSimpleName()+ "/jaxws-mtom/hello"); cxfEndpoint.setWsdlURL("mtom.wsdl"); Map<String, Object> properties = new HashMap<String, Object>(); properties.put("dataFormat", "PAYLOAD"); properties.put("mtom-enabled", true); cxfEndpoint.setProperties(properties); return cxfEndpoint; } You can produce a Camel message with attachment to send to a CXF endpoint in Payload mode. Exchange exchange = context.createProducerTemplate().send("direct:testEndpoint", new Processor() { public void process(Exchange exchange) throws Exception { exchange.setPattern(ExchangePattern.InOut); List<Source> elements = new ArrayList<Source>(); elements.add(new DOMSource(DOMUtils.readXml(new StringReader(MtomTestHelper.REQ_MESSAGE)).getDocumentElement())); CxfPayload<SoapHeader> body = new CxfPayload<SoapHeader>(new ArrayList<SoapHeader>(), elements, null); exchange.getIn().setBody(body); exchange.getIn().addAttachment(MtomTestHelper.REQ_PHOTO_CID, new DataHandler(new ByteArrayDataSource(MtomTestHelper.REQ_PHOTO_DATA, "application/octet-stream"))); exchange.getIn().addAttachment(MtomTestHelper.REQ_IMAGE_CID, new DataHandler(new ByteArrayDataSource(MtomTestHelper.requestJpeg, "image/jpeg"))); } }); // process response CxfPayload<SoapHeader> out = exchange.getOut().getBody(CxfPayload.class); Assert.assertEquals(1, out.getBody().size()); Map<String, String> ns = new HashMap<String, String>(); ns.put("ns", MtomTestHelper.SERVICE_TYPES_NS); ns.put("xop", MtomTestHelper.XOP_NS); XPathUtils xu = new XPathUtils(ns); Element oute = new XmlConverter().toDOMElement(out.getBody().get(0)); Element ele = (Element)xu.getValue("//ns:DetailResponse/ns:photo/xop:Include", oute, XPathConstants.NODE); String photoId = ele.getAttribute("href").substring(4); // skip "cid:" ele = (Element)xu.getValue("//ns:DetailResponse/ns:image/xop:Include", oute, XPathConstants.NODE); String imageId = ele.getAttribute("href").substring(4); // skip "cid:" DataHandler dr = exchange.getOut().getAttachment(photoId); Assert.assertEquals("application/octet-stream", dr.getContentType()); MtomTestHelper.assertEquals(MtomTestHelper.RESP_PHOTO_DATA, IOUtils.readBytesFromStream(dr.getInputStream())); dr = exchange.getOut().getAttachment(imageId); Assert.assertEquals("image/jpeg", dr.getContentType()); BufferedImage image = ImageIO.read(dr.getInputStream()); Assert.assertEquals(560, image.getWidth()); Assert.assertEquals(300, image.getHeight()); You can also consume a Camel message received from a CXF endpoint in Payload mode. 
The CxfMtomConsumerPayloadModeTest illustrates how this works: public static class MyProcessor implements Processor { @SuppressWarnings("unchecked") public void process(Exchange exchange) throws Exception { CxfPayload<SoapHeader> in = exchange.getIn().getBody(CxfPayload.class); // verify request Assert.assertEquals(1, in.getBody().size()); Map<String, String> ns = new HashMap<String, String>(); ns.put("ns", MtomTestHelper.SERVICE_TYPES_NS); ns.put("xop", MtomTestHelper.XOP_NS); XPathUtils xu = new XPathUtils(ns); Element body = new XmlConverter().toDOMElement(in.getBody().get(0)); Element ele = (Element)xu.getValue("//ns:Detail/ns:photo/xop:Include", body, XPathConstants.NODE); String photoId = ele.getAttribute("href").substring(4); // skip "cid:" Assert.assertEquals(MtomTestHelper.REQ_PHOTO_CID, photoId); ele = (Element)xu.getValue("//ns:Detail/ns:image/xop:Include", body, XPathConstants.NODE); String imageId = ele.getAttribute("href").substring(4); // skip "cid:" Assert.assertEquals(MtomTestHelper.REQ_IMAGE_CID, imageId); DataHandler dr = exchange.getIn().getAttachment(photoId); Assert.assertEquals("application/octet-stream", dr.getContentType()); MtomTestHelper.assertEquals(MtomTestHelper.REQ_PHOTO_DATA, IOUtils.readBytesFromStream(dr.getInputStream())); dr = exchange.getIn().getAttachment(imageId); Assert.assertEquals("image/jpeg", dr.getContentType()); MtomTestHelper.assertEquals(MtomTestHelper.requestJpeg, IOUtils.readBytesFromStream(dr.getInputStream())); // create response List<Source> elements = new ArrayList<Source>(); elements.add(new DOMSource(DOMUtils.readXml(new StringReader(MtomTestHelper.RESP_MESSAGE)).getDocumentElement())); CxfPayload<SoapHeader> sbody = new CxfPayload<SoapHeader>(new ArrayList<SoapHeader>(), elements, null); exchange.getOut().setBody(sbody); exchange.getOut().addAttachment(MtomTestHelper.RESP_PHOTO_CID, new DataHandler(new ByteArrayDataSource(MtomTestHelper.RESP_PHOTO_DATA, "application/octet-stream"))); exchange.getOut().addAttachment(MtomTestHelper.RESP_IMAGE_CID, new DataHandler(new ByteArrayDataSource(MtomTestHelper.responseJpeg, "image/jpeg"))); } } Raw Mode: Attachments are not supported as it does not process the message at all. CXF_RAW Mode : MTOM is supported, and Attachments can be retrieved by Camel Message APIs mentioned above. Note that when receiving a multipart (i.e. MTOM) message the default SOAPMessage to String converter will provide the complete multipart payload on the body. If you require just the SOAP XML as a String, you can set the message body with message.getSOAPPart(), and Camel convert can do the rest of work for you. 25.19. Streaming Support in PAYLOAD mode The camel-cxf component now supports streaming of incoming messages when using PAYLOAD mode. Previously, the incoming messages would have been completely DOM parsed. For large messages, this is time consuming and uses a significant amount of memory. The incoming messages can remain as a javax.xml.transform.Source while being routed and, if nothing modifies the payload, can then be directly streamed out to the target destination. For common "simple proxy" use cases (example: from("cxf:... ").to("cxf:... ")), this can provide very significant performance increases as well as significantly lowered memory requirements. However, there are cases where streaming may not be appropriate or desired. Due to the streaming nature, invalid incoming XML may not be caught until later in the processing chain. 
Also, certain actions may require the message to be DOM parsed anyway (like WS-Security or message tracing and such) in which case the advantages of streaming are limited. At this point, there are three ways to control the streaming: Endpoint property: you can add "allowStreaming=false" as an endpoint property to turn the streaming on/off. Component property: the CxfComponent object also has an allowStreaming property that can set the default for endpoints created from that component. Global system property: you can add a system property of "org.apache.camel.component.cxf.streaming" to "false" to turn it off. That sets the global default, but setting the endpoint property above will override this value for that endpoint. 25.20. Using the generic CXF Dispatch mode The camel-cxf component supports the generic CXF dispatch mode that can transport messages of arbitrary structures (i.e., not bound to a specific XML schema). To use this mode, you simply omit specifying the wsdlURL and serviceClass attributes of the CXF endpoint. <cxf:cxfEndpoint id="testEndpoint" address="http://localhost:9000/SoapContext/SoapAnyPort"> <cxf:properties> <entry key="dataFormat" value="PAYLOAD"/> </cxf:properties> </cxf:cxfEndpoint> Note that the default CXF dispatch client does not send a specific SOAPAction header. Therefore, when the target service requires a specific SOAPAction value, it is supplied in the Camel header using the key SOAPAction (case-insensitive). 25.21. Spring Boot Auto-Configuration The component supports 13 options, which are listed below. Name Description Default Type camel.component.cxf.allow-streaming This option controls whether the CXF component, when running in PAYLOAD mode, will DOM parse the incoming messages into DOM Elements or keep the payload as a javax.xml.transform.Source object that would allow streaming in some cases. Boolean camel.component.cxf.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.cxf.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.cxf.enabled Whether to enable auto configuration of the cxf component. This is enabled by default. Boolean camel.component.cxf.header-filter-strategy To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. HeaderFilterStrategy camel.component.cxf.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.cxf.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean camel.component.cxfrs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.cxfrs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.cxfrs.enabled Whether to enable auto configuration of the cxfrs component. This is enabled by default. Boolean camel.component.cxfrs.header-filter-strategy To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. HeaderFilterStrategy camel.component.cxfrs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.cxfrs.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-cxf-soap-starter</artifactId> </dependency>",
"cxf:bean:cxfEndpoint[?options]",
"cxf://someAddress[?options]",
"cxf:bean:cxfEndpoint?wsdlURL=wsdl/hello_world.wsdl&dataFormat=PAYLOAD",
"cxf:beanId:address",
"@Bean public CxfEndpoint serviceEndpoint(LoggingOutInterceptor loggingOutInterceptor) { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setAddress(\"http://localhost:\" + port + \"/services\" + SERVICE_ADDRESS); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.HelloService.class); Map<String, Object> properties = new HashMap<String, Object>(); properties.put(\"dataFormat\", \"RAW\"); cxfEndpoint.setProperties(properties); cxfEndpoint.getOutInterceptors().add(loggingOutInterceptor); return cxfEndpoint; } @Bean public LoggingOutInterceptor loggingOutInterceptor() { LoggingOutInterceptor logger = new LoggingOutInterceptor(\"write\"); return logger; }",
"<cxf:cxfEndpoint ...> <cxf:properties> <entry key=\"org.apache.camel.cxf.message.headers.relays\"> <list> <ref bean=\"customHeadersRelay\"/> </list> </entry> </cxf:properties> </cxf:cxfEndpoint> <bean id=\"customHeadersRelay\" class=\"org.apache.camel.component.cxf.soap.headers.CustomHeadersRelay\"/>",
"@Bean public HeaderFilterStrategy dropAllMessageHeadersStrategy() { CxfHeaderFilterStrategy headerFilterStrategy = new CxfHeaderFilterStrategy(); headerFilterStrategy.setRelayHeaders(false); return headerFilterStrategy; }",
"@Bean public CxfEndpoint routerNoRelayEndpoint(HeaderFilterStrategy dropAllMessageHeadersStrategy) { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.soap.headers.HeaderTester.class); cxfEndpoint.setAddress(\"/CxfMessageHeadersRelayTest/HeaderService/routerNoRelayEndpoint\"); cxfEndpoint.setWsdlURL(\"soap_header.wsdl\"); cxfEndpoint.setEndpointNameAsQName( QName.valueOf(\"{http://apache.org/camel/component/cxf/soap/headers}SoapPortNoRelay\")); cxfEndpoint.setServiceNameAsQName(SERVICENAME); Map<String, Object> properties = new HashMap<String, Object>(); properties.put(\"dataFormat\", \"PAYLOAD\"); cxfEndpoint.setProperties(properties); cxfEndpoint.setHeaderFilterStrategy(dropAllMessageHeadersStrategy); return cxfEndpoint; } @Bean public CxfEndpoint serviceNoRelayEndpoint(HeaderFilterStrategy dropAllMessageHeadersStrategy) { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.soap.headers.HeaderTester.class); cxfEndpoint.setAddress(\"http://localhost:\" + port + \"/services/CxfMessageHeadersRelayTest/HeaderService/routerNoRelayEndpointBackend\"); cxfEndpoint.setWsdlURL(\"soap_header.wsdl\"); cxfEndpoint.setEndpointNameAsQName( QName.valueOf(\"{http://apache.org/camel/component/cxf/soap/headers}SoapPortNoRelay\")); cxfEndpoint.setServiceNameAsQName(SERVICENAME); Map<String, Object> properties = new HashMap<String, Object>(); properties.put(\"dataFormat\", \"PAYLOAD\"); cxfEndpoint.setProperties(properties); cxfEndpoint.setHeaderFilterStrategy(dropAllMessageHeadersStrategy); return cxfEndpoint; }",
"rom(\"cxf:bean:routerNoRelayEndpoint\") .to(\"cxf:bean:serviceNoRelayEndpoint\");",
"@Bean public HeaderFilterStrategy customMessageFilterStrategy() { CxfHeaderFilterStrategy headerFilterStrategy = new CxfHeaderFilterStrategy(); List<MessageHeaderFilter> headerFilterList = new ArrayList<MessageHeaderFilter>(); headerFilterList.add(new SoapMessageHeaderFilter()); headerFilterList.add(new CustomHeaderFilter()); headerFilterStrategy.setMessageHeaderFilters(headerFilterList); return headerFilterStrategy; }",
"<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:cxf=\"http://camel.apache.org/schema/cxf/jaxws\" xsi:schemaLocation=\" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/cxf/jaxws http://camel.apache.org/schema/cxf/jaxws/camel-cxf.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd\"> <cxf:cxfEndpoint id=\"routerEndpoint\" address=\"http://localhost:9003/CamelContext/RouterPort\" serviceClass=\"org.apache.hello_world_soap_http.GreeterImpl\"/> <cxf:cxfEndpoint id=\"serviceEndpoint\" address=\"http://localhost:9000/SoapContext/SoapPort\" wsdlURL=\"testutils/hello_world.wsdl\" serviceClass=\"org.apache.hello_world_soap_http.Greeter\" endpointName=\"s:SoapPort\" serviceName=\"s:SOAPService\" xmlns:s=\"http://apache.org/hello_world_soap_http\" /> <camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"cxf:bean:routerEndpoint\" /> <to uri=\"cxf:bean:serviceEndpoint\" /> </route> </camelContext> </beans>",
"<cxf:cxfEndpoint id=\"testEndpoint\" address=\"http://localhost:9000/router\" serviceClass=\"org.apache.camel.component.cxf.HelloService\" endpointName=\"s:PortName\" serviceName=\"s:ServiceName\" xmlns:s=\"http://www.example.com/test\"> <cxf:properties> <entry key=\"dataFormat\" value=\"RAW\"/> <entry key=\"setDefaultBus\" value=\"true\"/> </cxf:properties> </cxf:cxfEndpoint>",
"@ImportResource({ \"classpath:spring-configuration.xml\" })",
"Error:sendSms: SoapFault exception: [Client] looks like we got no XML document in [...]",
"public class WriteXmlDeclarationInterceptor extends AbstractPhaseInterceptor<SoapMessage> { public WriteXmlDeclarationInterceptor() { super(Phase.PRE_STREAM); addBefore(StaxOutInterceptor.class.getName()); } public void handleMessage(SoapMessage message) throws Fault { message.put(\"org.apache.cxf.stax.force-start-document\", Boolean.TRUE); } }",
"// set up the response context which force start document Map<String, Object> map = new HashMap<String, Object>(); map.put(\"org.apache.cxf.stax.force-start-document\", Boolean.TRUE); exchange.getOut().setHeader(Client.RESPONSE_CONTEXT, map);",
"// set up the service address from the message header to override the setting of CXF endpoint exchange.getIn().setHeader(Exchange.DESTINATION_OVERRIDE_URL, constant(getServiceAddress()));",
"public class PersonProcessor implements Processor { private static final Logger LOG = LoggerFactory.getLogger(PersonProcessor.class); @Override @SuppressWarnings(\"unchecked\") public void process(Exchange exchange) throws Exception { LOG.info(\"processing exchange in camel\"); BindingOperationInfo boi = (BindingOperationInfo) exchange.getProperty(BindingOperationInfo.class.getName()); if (boi != null) { LOG.info(\"boi.isUnwrapped\" + boi.isUnwrapped()); } // Get the parameters list which element is the holder. MessageContentsList msgList = (MessageContentsList) exchange.getIn().getBody(); Holder<String> personId = (Holder<String>) msgList.get(0); Holder<String> ssn = (Holder<String>) msgList.get(1); Holder<String> name = (Holder<String>) msgList.get(2); if (personId.value == null || personId.value.length() == 0) { LOG.info(\"person id 123, so throwing exception\"); // Try to throw out the soap fault message org.apache.camel.wsdl_first.types.UnknownPersonFault personFault = new org.apache.camel.wsdl_first.types.UnknownPersonFault(); personFault.setPersonId(\"\"); org.apache.camel.wsdl_first.UnknownPersonFault fault = new org.apache.camel.wsdl_first.UnknownPersonFault(\"Get the null value of person name\", personFault); exchange.getMessage().setBody(fault); return; } name.value = \"Bonjour\"; ssn.value = \"123\"; LOG.info(\"setting Bonjour as the response\"); // Set the response message, first element is the return value of the operation, // the others are the holders of method parameters exchange.getMessage().setBody(new Object[] { null, personId, ssn, name }); } }",
"Exchange senderExchange = new DefaultExchange(context, ExchangePattern.InOut); final List<String> params = new ArrayList<>(); // Prepare the request message for the camel-cxf procedure params.add(TEST_MESSAGE); senderExchange.getIn().setBody(params); senderExchange.getIn().setHeader(CxfConstants.OPERATION_NAME, ECHO_OPERATION); Exchange exchange = template.send(\"direct:EndpointA\", senderExchange); org.apache.camel.Message out = exchange.getMessage(); // The response message's body is an MessageContentsList which first element is the return value of the operation, // If there are some holder parameters, the holder parameter will be filled in the reset of List. // The result will be extract from the MessageContentsList with the String class type MessageContentsList result = (MessageContentsList) out.getBody(); LOG.info(\"Received output text: \" + result.get(0)); Map<String, Object> responseContext = CastUtils.cast((Map<?, ?>) out.getHeader(Client.RESPONSE_CONTEXT)); assertNotNull(responseContext); assertEquals(\"UTF-8\", responseContext.get(org.apache.cxf.message.Message.ENCODING), \"We should get the response context here\"); assertEquals(\"echo \" + TEST_MESSAGE, result.get(0), \"Reply body on Camel is wrong\");",
"protected RouteBuilder createRouteBuilder() { return new RouteBuilder() { public void configure() { from(simpleEndpointURI + \"&dataFormat=PAYLOAD\").to(\"log:info\").process(new Processor() { @SuppressWarnings(\"unchecked\") public void process(final Exchange exchange) throws Exception { CxfPayload<SoapHeader> requestPayload = exchange.getIn().getBody(CxfPayload.class); List<Source> inElements = requestPayload.getBodySources(); List<Source> outElements = new ArrayList<>(); // You can use a customer toStringConverter to turn a CxfPayLoad message into String as you want String request = exchange.getIn().getBody(String.class); XmlConverter converter = new XmlConverter(); String documentString = ECHO_RESPONSE; Element in = new XmlConverter().toDOMElement(inElements.get(0)); // Just check the element namespace if (!in.getNamespaceURI().equals(ELEMENT_NAMESPACE)) { throw new IllegalArgumentException(\"Wrong element namespace\"); } if (in.getLocalName().equals(\"echoBoolean\")) { documentString = ECHO_BOOLEAN_RESPONSE; checkRequest(\"ECHO_BOOLEAN_REQUEST\", request); } else { documentString = ECHO_RESPONSE; checkRequest(\"ECHO_REQUEST\", request); } Document outDocument = converter.toDOMDocument(documentString, exchange); outElements.add(new DOMSource(outDocument.getDocumentElement())); // set the payload header with null CxfPayload<SoapHeader> responsePayload = new CxfPayload<>(null, outElements, null); exchange.getMessage().setBody(responsePayload); } }); } }; }",
"from(\"cxf:bean:routerRelayEndpointWithInsertion\") .process(new InsertRequestOutHeaderProcessor()) .to(\"cxf:bean:serviceRelayEndpointWithInsertion\") .process(new InsertResponseOutHeaderProcessor());",
"@Bean public CxfEndpoint routerRelayEndpointWithInsertion() { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.soap.headers.HeaderTester.class); cxfEndpoint.setAddress(\"/CxfMessageHeadersRelayTest/HeaderService/routerRelayEndpointWithInsertion\"); cxfEndpoint.setWsdlURL(\"soap_header.wsdl\"); cxfEndpoint.setEndpointNameAsQName( QName.valueOf(\"{http://apache.org/camel/component/cxf/soap/headers}SoapPortRelayWithInsertion\")); cxfEndpoint.setServiceNameAsQName(SERVICENAME); cxfEndpoint.getFeatures().add(new LoggingFeature()); return cxfEndpoint; } @Bean public CxfEndpoint serviceRelayEndpointWithInsertion() { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.soap.headers.HeaderTester.class); cxfEndpoint.setAddress(\"http://localhost:\" + port + \"/services/CxfMessageHeadersRelayTest/HeaderService/routerRelayEndpointWithInsertionBackend\"); cxfEndpoint.setWsdlURL(\"soap_header.wsdl\"); cxfEndpoint.setEndpointNameAsQName( QName.valueOf(\"{http://apache.org/camel/component/cxf/soap/headers}SoapPortRelayWithInsertion\")); cxfEndpoint.setServiceNameAsQName(SERVICENAME); cxfEndpoint.getFeatures().add(new LoggingFeature()); return cxfEndpoint; }",
"public static class InsertResponseOutHeaderProcessor implements Processor { public void process(Exchange exchange) throws Exception { List<SoapHeader> soapHeaders = CastUtils.cast((List<?>)exchange.getIn().getHeader(Header.HEADER_LIST)); // Insert a new header String xml = \"<?xml version=\\\"1.0\\\" encoding=\\\"utf-8\\\"?><outofbandHeader \" + \"xmlns=\\\"http://cxf.apache.org/outofband/Header\\\" hdrAttribute=\\\"testHdrAttribute\\\" \" + \"xmlns:soap=\\\"http://schemas.xmlsoap.org/soap/envelope/\\\" soap:mustUnderstand=\\\"1\\\">\" + \"<name>New_testOobHeader</name><value>New_testOobHeaderValue</value></outofbandHeader>\"; SoapHeader newHeader = new SoapHeader(soapHeaders.get(0).getName(), DOMUtils.readXml(new StringReader(xml)).getDocumentElement()); // make sure direction is OUT since it is a response message. newHeader.setDirection(Direction.DIRECTION_OUT); //newHeader.setMustUnderstand(false); soapHeaders.add(newHeader); } }",
"from(getRouterEndpointURI()).process(new Processor() { @SuppressWarnings(\"unchecked\") public void process(Exchange exchange) throws Exception { CxfPayload<SoapHeader> payload = exchange.getIn().getBody(CxfPayload.class); List<Source> elements = payload.getBodySources(); assertNotNull(elements, \"We should get the elements here\"); assertEquals(1, elements.size(), \"Get the wrong elements size\"); Element el = new XmlConverter().toDOMElement(elements.get(0)); elements.set(0, new DOMSource(el)); assertEquals(\"http://camel.apache.org/pizza/types\", el.getNamespaceURI(), \"Get the wrong namespace URI\"); List<SoapHeader> headers = payload.getHeaders(); assertNotNull(headers, \"We should get the headers here\"); assertEquals(1, headers.size(), \"Get the wrong headers size\"); assertEquals(\"http://camel.apache.org/pizza/types\", ((Element) (headers.get(0).getObject())).getNamespaceURI(), \"Get the wrong namespace URI\"); // alternatively you can also get the SOAP header via the camel header: headers = exchange.getIn().getHeader(Header.HEADER_LIST, List.class); assertNotNull(headers, \"We should get the headers here\"); assertEquals(1, headers.size(), \"Get the wrong headers size\"); assertEquals(\"http://camel.apache.org/pizza/types\", ((Element) (headers.get(0).getObject())).getNamespaceURI(), \"Get the wrong namespace URI\"); } }) .to(getServiceEndpointURI());",
"SOAP_FAULT = new SoapFault(EXCEPTION_MESSAGE, SoapFault.FAULT_CODE_CLIENT); Element detail = SOAP_FAULT.getOrCreateDetail(); Document doc = detail.getOwnerDocument(); Text tn = doc.createTextNode(DETAIL_TEXT); detail.appendChild(tn);",
"from(routerEndpointURI).setFaultBody(constant(SOAP_FAULT));",
"from(routerEndpointURI).process(new Processor() { public void process(Exchange exchange) throws Exception { Message out = exchange.getOut(); // Set the message body with the out.setBody(this.getClass().getResourceAsStream(\"SoapFaultMessage.xml\")); // Set the response code here out.setHeader(org.apache.cxf.message.Message.RESPONSE_CODE, new Integer(500)); } });",
"CxfExchange exchange = (CxfExchange)template.send(getJaxwsEndpointUri(), new Processor() { public void process(final Exchange exchange) { final List<String> params = new ArrayList<String>(); params.add(TEST_MESSAGE); // Set the request context to the inMessage Map<String, Object> requestContext = new HashMap<String, Object>(); requestContext.put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, JAXWS_SERVER_ADDRESS); exchange.getIn().setBody(params); exchange.getIn().setHeader(Client.REQUEST_CONTEXT , requestContext); exchange.getIn().setHeader(CxfConstants.OPERATION_NAME, GREET_ME_OPERATION); } }); org.apache.camel.Message out = exchange.getOut(); // The output is an object array, the first element of the array is the return value Object\\[\\] output = out.getBody(Object\\[\\].class); LOG.info(\"Received output text: \" + output\\[0\\]); // Get the response context form outMessage Map<String, Object> responseContext = CastUtils.cast((Map)out.getHeader(Client.RESPONSE_CONTEXT)); assertNotNull(responseContext); assertEquals(\"Get the wrong wsdl operation name\", \"{http://apache.org/hello_world_soap_http}greetMe\", responseContext.get(\"javax.xml.ws.wsdl.operation\").toString());",
"DataHandler Message.getAttachment(String id)",
"@Bean public CxfEndpoint routerEndpoint() { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceNameAsQName(SERVICE_QNAME); cxfEndpoint.setEndpointNameAsQName(PORT_QNAME); cxfEndpoint.setAddress(\"/\" + getClass().getSimpleName()+ \"/jaxws-mtom/hello\"); cxfEndpoint.setWsdlURL(\"mtom.wsdl\"); Map<String, Object> properties = new HashMap<String, Object>(); properties.put(\"dataFormat\", \"PAYLOAD\"); properties.put(\"mtom-enabled\", true); cxfEndpoint.setProperties(properties); return cxfEndpoint; }",
"Exchange exchange = context.createProducerTemplate().send(\"direct:testEndpoint\", new Processor() { public void process(Exchange exchange) throws Exception { exchange.setPattern(ExchangePattern.InOut); List<Source> elements = new ArrayList<Source>(); elements.add(new DOMSource(DOMUtils.readXml(new StringReader(MtomTestHelper.REQ_MESSAGE)).getDocumentElement())); CxfPayload<SoapHeader> body = new CxfPayload<SoapHeader>(new ArrayList<SoapHeader>(), elements, null); exchange.getIn().setBody(body); exchange.getIn().addAttachment(MtomTestHelper.REQ_PHOTO_CID, new DataHandler(new ByteArrayDataSource(MtomTestHelper.REQ_PHOTO_DATA, \"application/octet-stream\"))); exchange.getIn().addAttachment(MtomTestHelper.REQ_IMAGE_CID, new DataHandler(new ByteArrayDataSource(MtomTestHelper.requestJpeg, \"image/jpeg\"))); } }); // process response CxfPayload<SoapHeader> out = exchange.getOut().getBody(CxfPayload.class); Assert.assertEquals(1, out.getBody().size()); Map<String, String> ns = new HashMap<String, String>(); ns.put(\"ns\", MtomTestHelper.SERVICE_TYPES_NS); ns.put(\"xop\", MtomTestHelper.XOP_NS); XPathUtils xu = new XPathUtils(ns); Element oute = new XmlConverter().toDOMElement(out.getBody().get(0)); Element ele = (Element)xu.getValue(\"//ns:DetailResponse/ns:photo/xop:Include\", oute, XPathConstants.NODE); String photoId = ele.getAttribute(\"href\").substring(4); // skip \"cid:\" ele = (Element)xu.getValue(\"//ns:DetailResponse/ns:image/xop:Include\", oute, XPathConstants.NODE); String imageId = ele.getAttribute(\"href\").substring(4); // skip \"cid:\" DataHandler dr = exchange.getOut().getAttachment(photoId); Assert.assertEquals(\"application/octet-stream\", dr.getContentType()); MtomTestHelper.assertEquals(MtomTestHelper.RESP_PHOTO_DATA, IOUtils.readBytesFromStream(dr.getInputStream())); dr = exchange.getOut().getAttachment(imageId); Assert.assertEquals(\"image/jpeg\", dr.getContentType()); BufferedImage image = ImageIO.read(dr.getInputStream()); Assert.assertEquals(560, image.getWidth()); Assert.assertEquals(300, image.getHeight());",
"public static class MyProcessor implements Processor { @SuppressWarnings(\"unchecked\") public void process(Exchange exchange) throws Exception { CxfPayload<SoapHeader> in = exchange.getIn().getBody(CxfPayload.class); // verify request Assert.assertEquals(1, in.getBody().size()); Map<String, String> ns = new HashMap<String, String>(); ns.put(\"ns\", MtomTestHelper.SERVICE_TYPES_NS); ns.put(\"xop\", MtomTestHelper.XOP_NS); XPathUtils xu = new XPathUtils(ns); Element body = new XmlConverter().toDOMElement(in.getBody().get(0)); Element ele = (Element)xu.getValue(\"//ns:Detail/ns:photo/xop:Include\", body, XPathConstants.NODE); String photoId = ele.getAttribute(\"href\").substring(4); // skip \"cid:\" Assert.assertEquals(MtomTestHelper.REQ_PHOTO_CID, photoId); ele = (Element)xu.getValue(\"//ns:Detail/ns:image/xop:Include\", body, XPathConstants.NODE); String imageId = ele.getAttribute(\"href\").substring(4); // skip \"cid:\" Assert.assertEquals(MtomTestHelper.REQ_IMAGE_CID, imageId); DataHandler dr = exchange.getIn().getAttachment(photoId); Assert.assertEquals(\"application/octet-stream\", dr.getContentType()); MtomTestHelper.assertEquals(MtomTestHelper.REQ_PHOTO_DATA, IOUtils.readBytesFromStream(dr.getInputStream())); dr = exchange.getIn().getAttachment(imageId); Assert.assertEquals(\"image/jpeg\", dr.getContentType()); MtomTestHelper.assertEquals(MtomTestHelper.requestJpeg, IOUtils.readBytesFromStream(dr.getInputStream())); // create response List<Source> elements = new ArrayList<Source>(); elements.add(new DOMSource(DOMUtils.readXml(new StringReader(MtomTestHelper.RESP_MESSAGE)).getDocumentElement())); CxfPayload<SoapHeader> sbody = new CxfPayload<SoapHeader>(new ArrayList<SoapHeader>(), elements, null); exchange.getOut().setBody(sbody); exchange.getOut().addAttachment(MtomTestHelper.RESP_PHOTO_CID, new DataHandler(new ByteArrayDataSource(MtomTestHelper.RESP_PHOTO_DATA, \"application/octet-stream\"))); exchange.getOut().addAttachment(MtomTestHelper.RESP_IMAGE_CID, new DataHandler(new ByteArrayDataSource(MtomTestHelper.responseJpeg, \"image/jpeg\"))); } }",
"<cxf:cxfEndpoint id=\"testEndpoint\" address=\"http://localhost:9000/SoapContext/SoapAnyPort\"> <cxf:properties> <entry key=\"dataFormat\" value=\"PAYLOAD\"/> </cxf:properties> </cxf:cxfEndpoint>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-cxf-component-starter |
Appendix D. Red Hat Virtualization and Encrypted Communication | Appendix D. Red Hat Virtualization and Encrypted Communication D.1. Replacing the Red Hat Virtualization Manager CA Certificate Warning Do not change the permissions and ownerships for the /etc/pki directory or any subdirectories. The permission for the /etc/pki and the /etc/pki/ovirt-engine directory must remain as the default, 755 . You can configure your organization's third-party CA certificate to identify the Red Hat Virtualization Manager to users connecting over HTTPS. Note Using a third-party CA certificate for HTTPS connections does not affect the certificate used for authentication between the Manager and hosts. They will continue to use the self-signed certificate generated by the Manager. Prerequisites A third-party CA certificate. This is the certificate of the CA (Certificate Authority) that issued the certificate you want to use. It is provided as a PEM file. The certificate chain must be complete up to the root certificate. The chain's order is critical and must be from the last intermediate certificate to the root certificate. This procedure assumes that the third-party CA certificate is provided in /tmp/3rd-party-ca-cert.pem . The private key that you want to use for Apache httpd. It must not have a password. This procedure assumes that it is located in /tmp/apache.key . The certificate issued by the CA. This procedure assumes that it is located in /tmp/apache.cer . If you received the private key and certificate from your CA in a P12 file, use the following procedure to extract them. For other file formats, contact your CA. After extracting the private key and certificate, proceed to Replacing the Red Hat Virtualization Manager Apache CA Certificate . Extracting the Certificate and Private Key from a P12 Bundle The internal CA stores the internally generated key and certificate in a P12 file, in /etc/pki/ovirt-engine/keys/apache.p12 . Red Hat recommends storing your new file in the same location. The following procedure assumes that the new P12 file is in /tmp/apache.p12 . Back up the current apache.p12 file: Replace the current file with the new file: Extract the private key and certificate to the required locations. If the file is password protected, you must add -passin pass:_password_ , replacing password with the required password. Important For new Red Hat Virtualization installations, you must complete all of the steps in this procedure. If you upgraded from a Red Hat Enterprise Virtualization 3.6 environment with a commercially signed certificate already configured, only steps 1, 8, and 9 are required. Replacing the Red Hat Virtualization Manager Apache CA Certificate If you are using a self-hosted engine, put the environment into global maintenance mode. For more information, see Section 15.1, "Maintaining the Self-Hosted Engine" . Add your CA certificate to the host-wide trust store: The Manager has been configured to use /etc/pki/ovirt-engine/apache-ca.pem , which is symbolically linked to /etc/pki/ovirt-engine/ca.pem . 
Remove the symbolic link: Save your CA certificate as /etc/pki/ovirt-engine/apache-ca.pem : Back up the existing private key and certificate: Copy the private key to the required location: Set the private key owner to root and set the permissions to 0640 : Copy the certificate to the required location: Restart the Apache server: Create a new trust store configuration file, /etc/ovirt-engine/engine.conf.d/99-custom-truststore.conf , with the following parameters: Copy the /etc/ovirt-engine/ovirt-websocket-proxy.conf.d/10-setup.conf file, and rename it with an index number that is greater than 10 (for example, 99-setup.conf ). Add the following parameters to the new file: Restart the websocket-proxy service: If you manually changed the /etc/ovirt-provider-ovn/conf.d/10-setup-ovirt-provider-ovn.conf file, or are using a configuration file from an older installation, make sure that the Manager is still configured to use /etc/pki/ovirt-engine/apache-ca.pem as the certificate source. Enable engine-backup to update the system on restore by creating a new file, /etc/ovirt-engine-backup/engine-backup-config.d/update-system-wide-pki.sh , with the following content: Restart the ovirt-provider-ovn service: Restart the ovirt-engine service: If you are using a self-hosted engine, turn off global maintenance mode. Your users can now connect to the Administration Portal and VM Portal, without seeing a warning about the authenticity of the certificate used to encrypt HTTPS traffic. | [
"cp -p /etc/pki/ovirt-engine/keys/apache.p12 /etc/pki/ovirt-engine/keys/apache.p12.bck",
"cp /tmp/apache.p12 /etc/pki/ovirt-engine/keys/apache.p12",
"openssl pkcs12 -in /etc/pki/ovirt-engine/keys/apache.p12 -nocerts -nodes > /tmp/apache.key openssl pkcs12 -in /etc/pki/ovirt-engine/keys/apache.p12 -nokeys > /tmp/apache.cer",
"hosted-engine --set-maintenance --mode=global",
"cp /tmp/ 3rd-party-ca-cert .pem /etc/pki/ca-trust/source/anchors update-ca-trust",
"rm /etc/pki/ovirt-engine/apache-ca.pem",
"cp /tmp/ 3rd-party-ca-cert .pem /etc/pki/ovirt-engine/apache-ca.pem",
"cp /etc/pki/ovirt-engine/keys/apache.key.nopass /etc/pki/ovirt-engine/keys/apache.key.nopass.bck cp /etc/pki/ovirt-engine/certs/apache.cer /etc/pki/ovirt-engine/certs/apache.cer.bck",
"cp /tmp/apache.key /etc/pki/ovirt-engine/keys/apache.key.nopass",
"chown root:ovirt /etc/pki/ovirt-engine/keys/apache.key.nopass chmod 640 /etc/pki/ovirt-engine/keys/apache.key.nopass",
"cp /tmp/apache.cer /etc/pki/ovirt-engine/certs/apache.cer",
"systemctl restart httpd.service",
"ENGINE_HTTPS_PKI_TRUST_STORE=\"/etc/pki/java/cacerts\" ENGINE_HTTPS_PKI_TRUST_STORE_PASSWORD=\"\"",
"SSL_CERTIFICATE=/etc/pki/ovirt-engine/certs/apache.cer SSL_KEY=/etc/pki/ovirt-engine/keys/apache.key.nopass",
"systemctl restart ovirt-websocket-proxy.service",
"BACKUP_PATHS=\"USD{BACKUP_PATHS} /etc/ovirt-engine-backup\" cp -f /etc/pki/ovirt-engine/apache-ca.pem /etc/pki/ca-trust/source/anchors/ 3rd-party-ca-cert .pem update-ca-trust",
"systemctl restart ovirt-provider-ovn.service",
"systemctl restart ovirt-engine.service",
"hosted-engine --set-maintenance --mode=none"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/appe-Red_Hat_Enterprise_Virtualization_and_SSL |
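After restarting the services, it can be useful to confirm that Apache is now presenting the third-party certificate chain. This check is not part of the documented procedure; it is a minimal sketch, and the Manager FQDN is a placeholder you must replace with your own:

openssl s_client -connect manager.example.com:443 -showcerts </dev/null | openssl x509 -noout -subject -issuer -dates

The issuer reported should be your third-party CA rather than the Manager's internal CA.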
1.8.3. Routing Methods | 1.8.3. Routing Methods You can use Network Address Translation (NAT) routing or direct routing with LVS. The following sections briefly describe NAT routing and direct routing with LVS. 1.8.3.1. NAT Routing Figure 1.23, "LVS Implemented with NAT Routing" , illustrates LVS using NAT routing to move requests between the Internet and a private network. Figure 1.23. LVS Implemented with NAT Routing In the example, there are two NICs in the active LVS router. The NIC for the Internet has a real IP address on eth0 and has a floating IP address aliased to eth0:1. The NIC for the private network interface has a real IP address on eth1 and has a floating IP address aliased to eth1:1. In the event of failover, the virtual interface facing the Internet and the private facing virtual interface are taken over by the backup LVS router simultaneously. All the real servers on the private network use the floating IP for the NAT router as their default route to communicate with the active LVS router so that their abilities to respond to requests from the Internet is not impaired. In the example, the LVS router's public LVS floating IP address and private NAT floating IP address are aliased to two physical NICs. While it is possible to associate each floating IP address to its physical device on the LVS router nodes, having more than two NICs is not a requirement. Using this topology, the active LVS router receives the request and routes it to the appropriate server. The real server then processes the request and returns the packets to the LVS router. The LVS router uses network address translation to replace the address of the real server in the packets with the LVS routers public VIP address. This process is called IP masquerading because the actual IP addresses of the real servers is hidden from the requesting clients. Using NAT routing, the real servers can be any kind of computers running a variety operating systems. The main disadvantage of NAT routing is that the LVS router may become a bottleneck in large deployments because it must process outgoing and incoming requests. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_suite_overview/s2-lvs-routing-CSO |
Integrating | Integrating Red Hat Advanced Cluster Security for Kubernetes 4.5 Integrating Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift Documentation Team | [
"eksctl utils associate-iam-oidc-provider --cluster <cluster name> --approve",
"kubectl -n stackrox annotate sa central eks.amazonaws.com/role-arn=arn:aws:iam::67890:role/<role-name>",
"kubectl -n stackrox delete pod -l app=central",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"VisualEditor0\", \"Effect\": \"Allow\", \"Action\": \"sts:AssumeRole\", \"Resource\": \"arn:aws:iam::<ecr-registry>:role/<assumerole-readonly>\" 1 } ] }",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": [ \"arn:aws:iam::<ecr-registry>:role/<role-name>\" 1 ] }, \"Action\": \"sts:AssumeRole\" } ] }",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"VisualEditor0\", \"Effect\": \"Allow\", \"Action\": \"sts:AssumeRole\", \"Resource\": \"arn:aws:iam::<ecr-registry>:role/<assumerole-readonly>\" 1 } ] }",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": [ \"arn:aws:iam::<ecr-registry>:user/<role-name>\" ] }, \"Action\": \"sts:AssumeRole\" } ] }",
"export ROX_API_TOKEN=<api_token>",
"roxctl central debug dump --token-file <token_file>",
"arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"",
"curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Linux/roxctlUSD{arch}\"",
"chmod +x roxctl",
"echo USDPATH",
"roxctl version",
"arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"",
"curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Darwin/roxctlUSD{arch}\"",
"xattr -c roxctl",
"chmod +x roxctl",
"echo USDPATH",
"roxctl version",
"curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Windows/roxctl.exe",
"roxctl version",
"docker login registry.redhat.io",
"docker pull registry.redhat.io/advanced-cluster-security/rhacs-roxctl-rhel8:4.5.6",
"docker run -e ROX_API_TOKEN=USDROX_API_TOKEN -it registry.redhat.io/advanced-cluster-security/rhacs-roxctl-rhel8:4.5.6 -e USDROX_CENTRAL_ADDRESS <command>",
"docker run -it registry.redhat.io/advanced-cluster-security/rhacs-roxctl-rhel8:4.5.6 version",
"version: 2 jobs: check-policy-compliance: docker: - image: 'circleci/node:latest' auth: username: USDDOCKERHUB_USER password: USDDOCKERHUB_PASSWORD steps: - checkout - run: name: Install roxctl command: | curl -H \"Authorization: Bearer USDROX_API_TOKEN\" https://USDSTACKROX_CENTRAL_HOST:443/api/cli/download/roxctl-linux -o roxctl && chmod +x ./roxctl - run: name: Scan images for policy deviations and vulnerabilities command: | ./roxctl image check --endpoint \"USDSTACKROX_CENTRAL_HOST:443\" --image \"<your_registry/repo/image_name>\" 1 - run: name: Scan deployment files for policy deviations command: | ./roxctl image check --endpoint \"USDSTACKROX_CENTRAL_HOST:443\" --image \"<your_deployment_file>\" 2 # Important note: This step assumes the YAML file you'd like to test is located in the project. workflows: version: 2 build_and_test: jobs: - check-policy-compliance",
"example.com/slack-webhook: https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX",
"{ \"alert\": { \"id\": \"<id>\", \"time\": \"<timestamp>\", \"policy\": { \"name\": \"<name>\", }, }, \"<custom_field_1>\": \"<custom_value_1>\" }",
"annotations: jira/project-key: <jira_project_key>",
"{ \"customfield_10004\": 3, \"customfield_20005\": \"Alerts\", }",
"annotations: jira/project-key: <jira_project_key>",
"roxctl -e \"USDROX_CENTRAL_ADDRESS\" central debug log --level Debug --modules notifiers/jira",
"roxctl -e \"USDROX_CENTRAL_ADDRESS\" central debug dump",
"roxctl -e \"USDROX_CENTRAL_ADDRESS\" central debug log --level Info",
"annotations: email: <email_address>",
"annotations: email: <email_address>",
"eksctl utils associate-iam-oidc-provider --cluster <cluster_name> --approve",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"VisualEditor0\", \"Effect\": \"Allow\", \"Action\": [ \"ecr:BatchCheckLayerAvailability\", \"ecr:BatchGetImage\", \"ecr:DescribeImages\", \"ecr:DescribeRepositories\", \"ecr:GetAuthorizationToken\", \"ecr:GetDownloadUrlForLayer\", \"ecr:ListImages\" ], \"Resource\": \"arn:aws:iam::<ecr_registry>:role/<role_name>\" } ] }",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": [ \"arn:aws:iam::<ecr-registry>:role/<role_name>\" 1 ] }, \"Action\": \"sts:AssumeRole\" } ] }",
"oc -n stackrox annotate sa central eks.amazonaws.com/role-arn=arn:aws:iam::67890:role/<role_name> 1",
"oc -n stackrox delete pod -l \"app in (central,sensor)\" 1",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"VisualEditor0\", \"Effect\": \"Allow\", \"Action\": [ \"ecr:BatchCheckLayerAvailability\", \"ecr:BatchGetImage\", \"ecr:DescribeImages\", \"ecr:DescribeRepositories\", \"ecr:GetAuthorizationToken\", \"ecr:GetDownloadUrlForLayer\", \"ecr:ListImages\" ], \"Resource\": \"arn:aws:iam::<ecr_registry>:role/<role_name>\" } ] }",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"<oidc_provider_arn>\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"<oidc_provider_name>:aud\": \"openshift\" } } } ] }",
"AWS_ROLE_ARN=<role_arn> AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/openshift/serviceaccount/token",
"oc annotate serviceaccount \\ 1 central \\ 2 --namespace stackrox iam.gke.io/gcp-service-account=<GSA_NAME>@<GSA_PROJECT>.iam.gserviceaccount.com",
"gcloud iam workload-identity-pools create rhacs-pool --location=\"global\" --display-name=\"RHACS workload pool\"",
"gcloud iam workload-identity-pools providers create-oidc rhacs-provider --location=\"global\" --workload-identity-pool=\"rhacs-pool\" --display-name=\"RHACS provider\" --attribute-mapping=\"google.subject=assertion.sub\" --issuer-uri=\"https://<oidc_configuration_url>\" --allowed-audiences=openshift",
"gcloud iam service-accounts add-iam-policy-binding <GSA_NAME>@<GSA_PROJECT>.iam.gserviceaccount.com --role roles/iam.workloadIdentityUser --member=\"principal://iam.googleapis.com/projects/<GSA_PROJECT_NUMBER>/locations/global/workloadIdentityPools/rhacs-provider/subject/system:serviceaccount:stackrox:central\" 1",
"{ \"type\": \"external_account\", \"audience\": \"//iam.googleapis.com/projects/<GSA_PROJECT_ID>/locations/global/workloadIdentityPools/rhacs-pool/providers/rhacs-provider\", \"subject_token_type\": \"urn:ietf:params:oauth:token-type:jwt\", \"token_url\": \"https://sts.googleapis.com/v1/token\", \"service_account_impersonation_url\": \"https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/<GSA_NAME>@<GSA_PROJECT>.iam.gserviceaccount.com:generateAccessToken\", \"credential_source\": { \"file\": \"/var/run/secrets/openshift/serviceaccount/token\", \"format\": { \"type\": \"text\" } } }",
"apiVersion: v1 kind: Secret metadata: name: gcp-cloud-credentials namespace: stackrox data: credentials: <base64_encoded_json>"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html-single/integrating/index |
3.6. Example Custom Authentication Module | 3.6. Example Custom Authentication Module Suppose you are working on a project where user names and passwords are stored in a relational database; however, the passwords are base64 encoded, so you can't use the DatabaseServerLoginModule module directly. You can provide a subclass: To use this new module, you will need to declare a new security domain in the server configuration file: After that, configure the transport to use the security domain with the new authentication module: Note DatabaseServerLoginModule is in the picketbox JAR (moved from jbosssx in earlier versions). Maven pom.xml dependency example for Red Hat JBoss EAP: | [
"public class MyLoginModule extends DatabaseServerLoginModule { protected String convertRawPassword(String password) { try { return new String((new sun.misc.BASE64Decoder()).decodeBuffer(password)); } catch (IOException e) { return password; } } }",
"<security-domain name=\"my-security-domain\"> <authentication> <login-module code=\"com.mycompany.MyLoginModule\" flag=\"required\"> <module-option name=\"dsJndiName\">java:MyDataSource</module-option> <module-option name=\"principalsQuery\">select password from usertable where login=?</module-option> <module-option name=\"rolesQuery\">select role, 'Roles' from users, userroles where login=? and users.roleId=userroles.roleId</module-option> </login-module> </authentication> </security-domain>",
"<transport name=\"jdbc\" protocol=\"teiid\" socket-binding=\"teiid-jdbc\"> <authentication security-domain=\"my-security-domain\"/> </transport>",
"<dependency> <groupId>org.picketbox</groupId> <artifactId>picketbox</artifactId> <version>4.1.1.Final-redhat-1</version> <scope>provided</scope> </dependency>"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/security_guide/example_custom_authentication_module |
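For reference, the base64-encoded password values that the example module expects in the usertable can be produced with standard tooling. This is only an illustration of the encoding that convertRawPassword() reverses; the password value is a placeholder:

# Produce the value to store in the password column
echo -n 'secret' | base64
# prints: c2VjcmV0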
Chapter 1. Introduction | Chapter 1. Introduction Important Using the Red Hat Ceph file system (CephFS) through the native CephFS protocol is available only as a Technology Preview , and therefore is not fully supported by Red Hat. The deployment scenario described in this document should only be used for testing and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . Note Red Hat supports using CephFS through NFS. For more information, see Deploying the Shared File Systems service with CephFS through NFS . With the OpenStack Shared File Systems service (manila) you can provision shared file systems that can be consumed by multiple compute instances. This release includes a technology preview of the necessary driver for Red Hat CephFS (namely, manila.share.drivers.cephfs.cephfs_native.CephFSNativeDriver ). This driver allows the Shared File System service to use CephFS as a back end. The recommended method for configuring a Shared File System back end is through the director. Doing so involves writing a custom environment file. With this release, the director can now deploy the Shared File System with a CephFS back end on the overcloud. This document explains how to do so. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/cephfs_back_end_guide_for_the_shared_file_system_service/intro |
Chapter 6. Working with nodes | Chapter 6. Working with nodes 6.1. Viewing and listing the nodes in your OpenShift Container Platform cluster You can list all the nodes in your cluster to obtain information such as status, age, memory usage, and details about the nodes. When you perform node management operations, the CLI interacts with node objects that are representations of actual node hosts. The master uses the information from node objects to validate nodes with health checks. 6.1.1. About listing all the nodes in a cluster You can get detailed information on the nodes in the cluster. The following command lists all nodes: USD oc get nodes The following example is a cluster with healthy nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.30.3 node1.example.com Ready worker 7h v1.30.3 node2.example.com Ready worker 7h v1.30.3 The following example is a cluster with one unhealthy node: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.30.3 node1.example.com NotReady,SchedulingDisabled worker 7h v1.30.3 node2.example.com Ready worker 7h v1.30.3 The conditions that trigger a NotReady status are shown later in this section. The -o wide option provides additional information on nodes. USD oc get nodes -o wide Example output NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME master.example.com Ready master 171m v1.30.3 10.0.129.108 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.30.3-30.rhaos4.10.gitf2f339d.el8-dev node1.example.com Ready worker 72m v1.30.3 10.0.129.222 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.30.3-30.rhaos4.10.gitf2f339d.el8-dev node2.example.com Ready worker 164m v1.30.3 10.0.142.150 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.30.3-30.rhaos4.10.gitf2f339d.el8-dev The following command lists information about a single node: USD oc get node <node> For example: USD oc get node node1.example.com Example output NAME STATUS ROLES AGE VERSION node1.example.com Ready worker 7h v1.30.3 The following command provides more detailed information about a specific node, including the reason for the current condition: USD oc describe node <node> For example: USD oc describe node node1.example.com Note The following example contains some values that are specific to OpenShift Container Platform on AWS. 
Example output Name: node1.example.com 1 Roles: worker 2 Labels: kubernetes.io/os=linux kubernetes.io/hostname=ip-10-0-131-14 kubernetes.io/arch=amd64 3 node-role.kubernetes.io/worker= node.kubernetes.io/instance-type=m4.large node.openshift.io/os_id=rhcos node.openshift.io/os_version=4.5 region=east topology.kubernetes.io/region=us-east-1 topology.kubernetes.io/zone=us-east-1a Annotations: cluster.k8s.io/machine: openshift-machine-api/ahardin-worker-us-east-2a-q5dzc 4 machineconfiguration.openshift.io/currentConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/desiredConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/state: Done volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Wed, 13 Feb 2019 11:05:57 -0500 Taints: <none> 5 Unschedulable: false Conditions: 6 Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- OutOfDisk False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientDisk kubelet has sufficient disk space available MemoryPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:07:09 -0500 KubeletReady kubelet is posting ready status Addresses: 7 InternalIP: 10.0.140.16 InternalDNS: ip-10-0-140-16.us-east-2.compute.internal Hostname: ip-10-0-140-16.us-east-2.compute.internal Capacity: 8 attachable-volumes-aws-ebs: 39 cpu: 2 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8172516Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7558116Ki pods: 250 System Info: 9 Machine ID: 63787c9534c24fde9a0cde35c13f1f66 System UUID: EC22BF97-A006-4A58-6AF8-0A38DEEA122A Boot ID: f24ad37d-2594-46b4-8830-7f7555918325 Kernel Version: 3.10.0-957.5.1.el7.x86_64 OS Image: Red Hat Enterprise Linux CoreOS 410.8.20190520.0 (Ootpa) Operating System: linux Architecture: amd64 Container Runtime Version: cri-o://1.30.3-0.6.dev.rhaos4.3.git9ad059b.el8-rc2 Kubelet Version: v1.30.3 Kube-Proxy Version: v1.30.3 PodCIDR: 10.128.4.0/24 ProviderID: aws:///us-east-2a/i-04e87b31dc6b3e171 Non-terminated Pods: (12 in total) 10 Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- openshift-cluster-node-tuning-operator tuned-hdl5q 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-dns dns-default-l69zr 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-image-registry node-ca-9hmcg 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-ingress router-default-76455c45c-c5ptv 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-machine-config-operator machine-config-daemon-cvqw9 20m (1%) 0 (0%) 50Mi (0%) 0 (0%) openshift-marketplace community-operators-f67fh 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-monitoring alertmanager-main-0 50m (3%) 50m (3%) 210Mi (2%) 10Mi (0%) openshift-monitoring node-exporter-l7q8d 10m (0%) 20m (1%) 20Mi (0%) 40Mi (0%) openshift-monitoring prometheus-adapter-75d769c874-hvb85 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-multus multus-kw8w5 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-ovn-kubernetes ovnkube-node-t4dsn 
80m (0%) 0 (0%) 1630Mi (0%) 0 (0%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 380m (25%) 270m (18%) memory 880Mi (11%) 250Mi (3%) attachable-volumes-aws-ebs 0 0 Events: 11 Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeHasSufficientPID 6d (x5 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 6d kubelet, m01.example.com Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasNoDiskPressure Normal NodeHasSufficientDisk 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientDisk Normal NodeHasSufficientPID 6d kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal Starting 6d kubelet, m01.example.com Starting kubelet. #... 1 The name of the node. 2 The role of the node, either master or worker . 3 The labels applied to the node. 4 The annotations applied to the node. 5 The taints applied to the node. 6 The node conditions and status. The conditions stanza lists the Ready , PIDPressure , MemoryPressure , DiskPressure and OutOfDisk status. These condition are described later in this section. 7 The IP address and hostname of the node. 8 The pod resources and allocatable resources. 9 Information about the node host. 10 The pods on the node. 11 The events reported by the node. Note The control plane label is not automatically added to newly created or updated master nodes. If you want to use the control plane label for your nodes, you can manually configure the label. For more information, see Understanding how to update labels on nodes in the Additional resources section. Among the information shown for nodes, the following node conditions appear in the output of the commands shown in this section: Table 6.1. Node Conditions Condition Description Ready If true , the node is healthy and ready to accept pods. If false , the node is not healthy and is not accepting pods. If unknown , the node controller has not received a heartbeat from the node for the node-monitor-grace-period (the default is 40 seconds). DiskPressure If true , the disk capacity is low. MemoryPressure If true , the node memory is low. PIDPressure If true , there are too many processes on the node. OutOfDisk If true , the node has insufficient free space on the node for adding new pods. NetworkUnavailable If true , the network for the node is not correctly configured. NotReady If true , one of the underlying components, such as the container runtime or network, is experiencing issues or is not yet configured. SchedulingDisabled Pods cannot be scheduled for placement on the node. Additional resources Understanding how to update labels on nodes 6.1.2. Listing pods on a node in your cluster You can list all the pods on a specific node. Procedure To list all or selected pods on selected nodes: USD oc get pod --selector=<nodeSelector> USD oc get pod --selector=kubernetes.io/os Or: USD oc get pod -l=<nodeSelector> USD oc get pod -l kubernetes.io/os=linux To list all pods on a specific node, including terminated pods: USD oc get pod --all-namespaces --field-selector=spec.nodeName=<nodename> 6.1.3. 
Viewing memory and CPU usage statistics on your nodes You can display usage statistics about nodes, which provide the runtime environments for containers. These usage statistics include CPU, memory, and storage consumption. Prerequisites You must have cluster-reader permission to view the usage statistics. Metrics must be installed to view the usage statistics. Procedure To view the usage statistics: USD oc adm top nodes Example output NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% ip-10-0-12-143.ec2.compute.internal 1503m 100% 4533Mi 61% ip-10-0-132-16.ec2.compute.internal 76m 5% 1391Mi 18% ip-10-0-140-137.ec2.compute.internal 398m 26% 2473Mi 33% ip-10-0-142-44.ec2.compute.internal 656m 43% 6119Mi 82% ip-10-0-146-165.ec2.compute.internal 188m 12% 3367Mi 45% ip-10-0-19-62.ec2.compute.internal 896m 59% 5754Mi 77% ip-10-0-44-193.ec2.compute.internal 632m 42% 5349Mi 72% To view the usage statistics for nodes with labels: USD oc adm top node --selector='' You must choose the selector (label query) to filter on. Supports = , == , and != . 6.2. Working with nodes As an administrator, you can perform several tasks to make your clusters more efficient. 6.2.1. Understanding how to evacuate pods on nodes Evacuating pods allows you to migrate all or selected pods from a given node or nodes. You can only evacuate pods backed by a replication controller. The replication controller creates new pods on other nodes and removes the existing pods from the specified node(s). Bare pods, meaning those not backed by a replication controller, are unaffected by default. You can evacuate a subset of pods by specifying a pod-selector. Pod selectors are based on labels, so all the pods with the specified label will be evacuated. Procedure Mark the nodes unschedulable before performing the pod evacuation. Mark the node as unschedulable: USD oc adm cordon <node1> Example output node/<node1> cordoned Check that the node status is Ready,SchedulingDisabled : USD oc get node <node1> Example output NAME STATUS ROLES AGE VERSION <node1> Ready,SchedulingDisabled worker 1d v1.30.3 Evacuate the pods using one of the following methods: Evacuate all or selected pods on one or more nodes: USD oc adm drain <node1> <node2> [--pod-selector=<pod_selector>] Force the deletion of bare pods using the --force option. When set to true , deletion continues even if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set: USD oc adm drain <node1> <node2> --force=true Set a period of time in seconds for each pod to terminate gracefully, use --grace-period . If negative, the default value specified in the pod will be used: USD oc adm drain <node1> <node2> --grace-period=-1 Ignore pods managed by daemon sets using the --ignore-daemonsets flag set to true : USD oc adm drain <node1> <node2> --ignore-daemonsets=true Set the length of time to wait before giving up using the --timeout flag. A value of 0 sets an infinite length of time: USD oc adm drain <node1> <node2> --timeout=5s Delete pods even if there are pods using emptyDir volumes by setting the --delete-emptydir-data flag to true . 
Local data is deleted when the node is drained: USD oc adm drain <node1> <node2> --delete-emptydir-data=true List objects that will be migrated without actually performing the evacuation, using the --dry-run option set to true : USD oc adm drain <node1> <node2> --dry-run=true Instead of specifying specific node names (for example, <node1> <node2> ), you can use the --selector=<node_selector> option to evacuate pods on selected nodes. Mark the node as schedulable when done. USD oc adm uncordon <node1> 6.2.2. Understanding how to update labels on nodes You can update any label on a node. Node labels are not persisted after a node is deleted even if the node is backed up by a Machine. Note Any change to a MachineSet object is not applied to existing machines owned by the compute machine set. For example, labels edited or added to an existing MachineSet object are not propagated to existing machines and nodes associated with the compute machine set. The following command adds or updates labels on a node: USD oc label node <node> <key_1>=<value_1> ... <key_n>=<value_n> For example: USD oc label nodes webconsole-7f7f6 unhealthy=true Tip You can alternatively apply the following YAML to apply the label: kind: Node apiVersion: v1 metadata: name: webconsole-7f7f6 labels: unhealthy: 'true' #... The following command updates all pods in the namespace: USD oc label pods --all <key_1>=<value_1> For example: USD oc label pods --all status=unhealthy 6.2.3. Understanding how to mark nodes as unschedulable or schedulable By default, healthy nodes with a Ready status are marked as schedulable, which means that you can place new pods on the node. Manually marking a node as unschedulable blocks any new pods from being scheduled on the node. Existing pods on the node are not affected. The following command marks a node or nodes as unschedulable: Example output USD oc adm cordon <node> For example: USD oc adm cordon node1.example.com Example output node/node1.example.com cordoned NAME LABELS STATUS node1.example.com kubernetes.io/hostname=node1.example.com Ready,SchedulingDisabled The following command marks a currently unschedulable node or nodes as schedulable: USD oc adm uncordon <node1> Alternatively, instead of specifying specific node names (for example, <node> ), you can use the --selector=<node_selector> option to mark selected nodes as schedulable or unschedulable. 6.2.4. Handling errors in single-node OpenShift clusters when the node reboots without draining application pods In single-node OpenShift clusters and in OpenShift Container Platform clusters in general, a situation can arise where a node reboot occurs without first draining the node. This can occur where an application pod requesting devices fails with the UnexpectedAdmissionError error. Deployment , ReplicaSet , or DaemonSet errors are reported because the application pods that require those devices start before the pod serving those devices. You cannot control the order of pod restarts. While this behavior is to be expected, it can cause a pod to remain on the cluster even though it has failed to deploy successfully. The pod continues to report UnexpectedAdmissionError . This issue is mitigated by the fact that application pods are typically included in a Deployment , ReplicaSet , or DaemonSet . If a pod is in this error state, it is of little concern because another instance should be running. 
Belonging to a Deployment , ReplicaSet , or DaemonSet guarantees the successful creation and execution of subsequent pods and ensures the successful deployment of the application. There is ongoing work upstream to ensure that such pods are gracefully terminated. Until that work is resolved, run the following command for a single-node OpenShift cluster to remove the failed pods: USD oc delete pods --field-selector status.phase=Failed -n <POD_NAMESPACE> Note The option to drain the node is unavailable for single-node OpenShift clusters. Additional resources Understanding how to evacuate pods on nodes 6.2.5. Deleting nodes 6.2.5.1. Deleting nodes from a cluster To delete a node from the OpenShift Container Platform cluster, scale down the appropriate MachineSet object. Important When a cluster is integrated with a cloud provider, you must delete the corresponding machine to delete a node. Do not try to use the oc delete node command for this task. When you delete a node by using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods that are not backed by a replication controller become inaccessible to OpenShift Container Platform. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods. Note If you are running cluster on bare metal, you cannot delete a node by editing MachineSet objects. Compute machine sets are only available when a cluster is integrated with a cloud provider. Instead you must unschedule and drain the node before manually deleting it. Procedure View the compute machine sets that are in the cluster by running the following command: USD oc get machinesets -n openshift-machine-api The compute machine sets are listed in the form of <cluster-id>-worker-<aws-region-az> . Scale down the compute machine set by using one of the following methods: Specify the number of replicas to scale down to by running the following command: USD oc scale --replicas=2 machineset <machine-set-name> -n openshift-machine-api Edit the compute machine set custom resource by running the following command: USD oc edit machineset <machine-set-name> -n openshift-machine-api Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: # ... name: <machine-set-name> namespace: openshift-machine-api # ... spec: replicas: 2 1 # ... 1 Specify the number of replicas to scale down to. Additional resources Manually scaling a compute machine set 6.2.5.2. Deleting nodes from a bare metal cluster When you delete a node using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods not backed by a replication controller become inaccessible to OpenShift Container Platform. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods. Procedure Delete a node from an OpenShift Container Platform cluster running on bare metal by completing the following steps: Mark the node as unschedulable: USD oc adm cordon <node_name> Drain all pods on the node: USD oc adm drain <node_name> --force=true This step might fail if the node is offline or unresponsive. Even if the node does not respond, it might still be running a workload that writes to shared storage. To avoid data corruption, power down the physical hardware before you proceed. 
Delete the node from the cluster: USD oc delete node <node_name> Although the node object is now deleted from the cluster, it can still rejoin the cluster after reboot or if the kubelet service is restarted. To permanently delete the node and all its data, you must decommission the node . If you powered down the physical hardware, turn it back on so that the node can rejoin the cluster. 6.3. Managing nodes OpenShift Container Platform uses a KubeletConfig custom resource (CR) to manage the configuration of nodes. By creating an instance of a KubeletConfig object, a managed machine config is created to override setting on the node. Note Logging in to remote machines for the purpose of changing their configuration is not supported. 6.3.1. Modifying nodes To make configuration changes to a cluster, or machine pool, you must create a custom resource definition (CRD), or kubeletConfig object. OpenShift Container Platform uses the Machine Config Controller to watch for changes introduced through the CRD to apply the changes to the cluster. Note Because the fields in a kubeletConfig object are passed directly to the kubelet from upstream Kubernetes, the validation of those fields is handled directly by the kubelet itself. Please refer to the relevant Kubernetes documentation for the valid values for these fields. Invalid values in the kubeletConfig object can render cluster nodes unusable. Procedure Obtain the label associated with the static CRD, Machine Config Pool, for the type of node you want to configure. Perform one of the following steps: Check current labels of the desired machine config pool. For example: USD oc get machineconfigpool --show-labels Example output NAME CONFIG UPDATED UPDATING DEGRADED LABELS master rendered-master-e05b81f5ca4db1d249a1bf32f9ec24fd True False False operator.machineconfiguration.openshift.io/required-for-upgrade= worker rendered-worker-f50e78e1bc06d8e82327763145bfcf62 True False False Add a custom label to the desired machine config pool. For example: USD oc label machineconfigpool worker custom-kubelet=enabled Create a kubeletconfig custom resource (CR) for your configuration change. For example: Sample configuration for a custom-config CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-config 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: enabled 2 kubeletConfig: 3 podsPerCore: 10 maxPods: 250 systemReserved: cpu: 2000m memory: 1Gi #... 1 Assign a name to CR. 2 Specify the label to apply the configuration change, this is the label you added to the machine config pool. 3 Specify the new value(s) you want to change. Create the CR object. USD oc create -f <file-name> For example: USD oc create -f master-kube-config.yaml Most Kubelet Configuration options can be set by the user. The following options are not allowed to be overwritten: CgroupDriver ClusterDNS ClusterDomain StaticPodPath Note If a single node contains more than 50 images, pod scheduling might be imbalanced across nodes. This is because the list of images on a node is shortened to 50 by default. You can disable the image limit by editing the KubeletConfig object and setting the value of nodeStatusMaxImages to -1 . 6.3.2. Updating boot images The Machine Config Operator (MCO) uses a boot image to bring up a Red Hat Enterprise Linux CoreOS (RHCOS) node. By default, OpenShift Container Platform does not manage the boot image. This means that the boot image in your cluster is not updated along with your cluster. 
For example, if your cluster was originally created with OpenShift Container Platform 4.12, the boot image that the cluster uses to create nodes is the same 4.12 version, even if your cluster is at a later version. If the cluster is later upgraded to 4.13 or later, new nodes continue to scale with the same 4.12 image. This process could cause the following issues: Extra time to start up nodes Certificate expiration issues Version skew issues To avoid these issues, you can configure your cluster to update the boot image whenever you update your cluster. By modifying the MachineConfiguration object, you can enable this feature. Currently, the ability to update the boot image is available for only Google Cloud Platform (GCP) clusters and is not supported for Cluster CAPI Operator managed clusters. Important The updating boot image feature is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . To view the current boot image used in your cluster, examine a machine set: Example machine set with the boot image reference apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: ci-ln-hmy310k-72292-5f87z-worker-a namespace: openshift-machine-api spec: # ... template: # ... spec: # ... providerSpec: # ... value: disks: - autoDelete: true boot: true image: projects/rhcos-cloud/global/images/rhcos-412-85-202203181601-0-gcp-x86-64 1 # ... 1 This boot image is the same as the originally-installed OpenShift Container Platform version, in this example OpenShift Container Platform 4.12, regardless of the current version of the cluster. The way that the boot image is represented in the machine set depends on the platform, as the structure of the providerSpec field differs from platform to platform. If you configure your cluster to update your boot images, the boot image referenced in your machine sets matches the current version of the cluster. Prerequisites You have enabled the TechPreviewNoUpgrade feature set by using the feature gates. For more information, see "Enabling features using feature gates" in the "Additional resources" section. Procedure Edit the MachineConfiguration object, named cluster , to enable the updating of boot images by running the following command: USD oc edit MachineConfiguration cluster Optional: Configure the boot image update feature for all the machine sets: apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: # ... managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: All 2 1 Activates the boot image update feature. 2 Specifies that all the machine sets in the cluster are to be updated. Optional: Configure the boot image update feature for specific machine sets: apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: # ... 
managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: Partial partial: machineResourceSelector: matchLabels: update-boot-image: "true" 2 1 Activates the boot image update feature. 2 Specifies that any machine set with this label is to be updated. Tip If an appropriate label is not present on the machine set, add a key/value pair by running a command similar to following: Verification Get the boot image version by running the following command: USD oc get machinesets <machineset_name> -n openshift-machine-api -o yaml Example machine set with the boot image reference apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: ci-ln-77hmkpt-72292-d4pxp update-boot-image: "true" name: ci-ln-77hmkpt-72292-d4pxp-worker-a namespace: openshift-machine-api spec: # ... template: # ... spec: # ... providerSpec: # ... value: disks: - autoDelete: true boot: true image: projects/rhcos-cloud/global/images/rhcos-416-92-202402201450-0-gcp-x86-64 1 # ... 1 This boot image is the same as the current OpenShift Container Platform version. Additional resources Enabling features using feature gates 6.3.2.1. Disabling updated boot images To disable the updated boot image feature, edit the MachineConfiguration object to remove the managedBootImages stanza. If you disable this feature after some nodes have been created with the new boot image version, any existing nodes retain their current boot image. Turning off this feature does not rollback the nodes or machine sets to the originally-installed boot image. The machine sets retain the boot image version that was present when the feature was enabled and is not updated again when the cluster is upgraded to a new OpenShift Container Platform version in the future. Procedure Disable updated boot images by editing the MachineConfiguration object: USD oc edit MachineConfiguration cluster Remove the managedBootImages stanza: apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: # ... managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: All 1 Remove the entire stanza to disable updated boot images. 6.3.3. Configuring control plane nodes as schedulable You can configure control plane nodes to be schedulable, meaning that new pods are allowed for placement on the master nodes. By default, control plane nodes are not schedulable. You can set the masters to be schedulable, but must retain the worker nodes. Note You can deploy OpenShift Container Platform with no worker nodes on a bare metal cluster. In this case, the control plane nodes are marked schedulable by default. You can allow or disallow control plane nodes to be schedulable by configuring the mastersSchedulable field. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become worker nodes. Procedure Edit the schedulers.config.openshift.io resource. USD oc edit schedulers.config.openshift.io cluster Configure the mastersSchedulable field. apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: "2019-09-10T03:04:05Z" generation: 1 name: cluster resourceVersion: "433" selfLink: /apis/config.openshift.io/v1/schedulers/cluster uid: a636d30a-d377-11e9-88d4-0a60097bee62 spec: mastersSchedulable: false 1 status: {} #... 
1 Set to true to allow control plane nodes to be schedulable, or false to disallow control plane nodes to be schedulable. Save the file to apply the changes. 6.3.4. Setting SELinux booleans OpenShift Container Platform allows you to enable and disable an SELinux boolean on a Red Hat Enterprise Linux CoreOS (RHCOS) node. The following procedure explains how to modify SELinux booleans on nodes using the Machine Config Operator (MCO). This procedure uses container_manage_cgroup as the example boolean. You can modify this value to whichever boolean you need. Prerequisites You have installed the OpenShift CLI (oc). Procedure Create a new YAML file with a MachineConfig object, displayed in the following example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux booleans Before=kubelet.service [Service] Type=oneshot ExecStart=/sbin/setsebool container_manage_cgroup=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target enabled: true name: setsebool.service #... Create the new MachineConfig object by running the following command: USD oc create -f 99-worker-setsebool.yaml Note Applying any changes to the MachineConfig object causes all affected nodes to gracefully reboot after the change is applied. 6.3.5. Adding kernel arguments to nodes In some special cases, you might want to add kernel arguments to a set of nodes in your cluster. This should only be done with caution and clear understanding of the implications of the arguments you set. Warning Improper use of kernel arguments can result in your systems becoming unbootable. Examples of kernel arguments you could set include: nosmt : Disables symmetric multithreading (SMT) in the kernel. Multithreading allows multiple logical threads for each CPU. You could consider nosmt in multi-tenant environments to reduce risks from potential cross-thread attacks. By disabling SMT, you essentially choose security over performance. systemd.unified_cgroup_hierarchy : Enables Linux control group version 2 (cgroup v2). cgroup v2 is the version of the kernel control group and offers multiple improvements. Important cgroup v1 is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. enforcing=0 : Configures Security Enhanced Linux (SELinux) to run in permissive mode. In permissive mode, the system acts as if SELinux is enforcing the loaded security policy, including labeling objects and emitting access denial entries in the logs, but it does not actually deny any operations. While not supported for production systems, permissive mode can be helpful for debugging. Warning Disabling SELinux on RHCOS in production is not supported. Once SELinux has been disabled on a node, it must be re-provisioned before re-inclusion in a production cluster. See Kernel.org kernel parameters for a list and descriptions of kernel arguments. 
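Before you add or change kernel arguments, it can help to review what is already set on a representative node. One quick check, mirroring the verification step later in this procedure and assuming you have cluster-admin access, is to read the host's /proc/cmdline through a debug pod (substitute your own node name):

$ oc debug node/<node_name> -- cat /host/proc/cmdline

The existing arguments appear on a single line; any arguments you add through a MachineConfig object are appended to the end of that list.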
In the following procedure, you create a MachineConfig object that identifies: A set of machines to which you want to add the kernel argument. In this case, machines with a worker role. Kernel arguments that are appended to the end of the existing kernel arguments. A label that indicates where in the list of machine configs the change is applied. Prerequisites Have administrative privilege to a working OpenShift Container Platform cluster. Procedure List existing MachineConfig objects for your OpenShift Container Platform cluster to determine how to label your machine config: USD oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m Create a MachineConfig object file that identifies the kernel argument (for example, 05-worker-kernelarg-selinuxpermissive.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 05-worker-kernelarg-selinuxpermissive 2 spec: kernelArguments: - enforcing=0 3 1 Applies the new kernel argument only to worker nodes. 2 Named to identify where it fits among the machine configs (05) and what it does (adds a kernel argument to configure SELinux permissive mode). 3 Identifies the exact kernel argument as enforcing=0 . 
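If you want to catch syntax problems in the file before it reaches the cluster, you can optionally run a client-side dry run first. This sketch assumes the file name used in this example:

$ oc create -f 05-worker-kernelarg-selinuxpermissive.yaml --dry-run=client -o yaml

The command prints the object that would be created without applying it, so no machine config is rolled out and no nodes are rebooted.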
Create the new machine config: USD oc create -f 05-worker-kernelarg-selinuxpermissive.yaml Check the machine configs to see that the new one was added: USD oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 05-worker-kernelarg-selinuxpermissive 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m Check the nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.30.3 ip-10-0-136-243.ec2.internal Ready master 34m v1.30.3 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.30.3 ip-10-0-142-249.ec2.internal Ready master 34m v1.30.3 ip-10-0-153-11.ec2.internal Ready worker 28m v1.30.3 ip-10-0-153-150.ec2.internal Ready master 34m v1.30.3 You can see that scheduling on each worker node is disabled as the change is being applied. Check that the kernel argument worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host): USD oc debug node/ip-10-0-141-105.ec2.internal Example output Starting pod/ip-10-0-141-105ec2internal-debug ... To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... ostree=/ostree/boot.0/rhcos/16... coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0 sh-4.2# exit You should see the enforcing=0 argument added to the other kernel arguments. 6.3.6. Enabling swap memory use on nodes Important Enabling swap memory use on nodes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Note Enabling swap memory is only available for container-native virtualization (CNV) users or use cases. Warning Enabling swap memory can negatively impact workload performance and out-of-resource handling. Do not enable swap memory on control plane nodes. To enable swap memory, create a kubeletconfig custom resource (CR) to set the swapbehavior parameter. You can set limited or unlimited swap memory: Limited: Use the LimitedSwap value to limit how much swap memory workloads can use. Any workloads on the node that are not managed by OpenShift Container Platform can still use swap memory. 
The LimitedSwap behavior depends on whether the node is running with Linux control groups version 1 (cgroups v1) or version 2 (cgroup v2) : cgroup v1: OpenShift Container Platform workloads can use any combination of memory and swap, up to the pod's memory limit, if set. cgroup v2: OpenShift Container Platform workloads cannot use swap memory. Unlimited: Use the UnlimitedSwap value to allow workloads to use as much swap memory as they request, up to the system limit. Because the kubelet will not start in the presence of swap memory without this configuration, you must enable swap memory in OpenShift Container Platform before enabling swap memory on the nodes. If there is no swap memory present on a node, enabling swap memory in OpenShift Container Platform has no effect. Prerequisites You have a running OpenShift Container Platform cluster that uses version 4.10 or later. You are logged in to the cluster as a user with administrative privileges. You have enabled the TechPreviewNoUpgrade feature set on the cluster (see Nodes Working with clusters Enabling features using feature gates ). Note Enabling the TechPreviewNoUpgrade feature set cannot be undone and prevents minor version updates. These feature sets are not recommended on production clusters. If cgroup v2 is enabled on a node, you must enable swap accounting on the node, by setting the swapaccount=1 kernel argument. Procedure Apply a custom label to the machine config pool where you want to allow swap memory. USD oc label machineconfigpool worker kubelet-swap=enabled Create a custom resource (CR) to enable and configure swap settings. apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: swap-config spec: machineConfigPoolSelector: matchLabels: kubelet-swap: enabled kubeletConfig: failSwapOn: false 1 memorySwap: swapBehavior: LimitedSwap 2 #... 1 Set to false to enable swap memory use on the associated nodes. Set to true to disable swap memory use. 2 Specify the swap memory behavior. If unspecified, the default is LimitedSwap . Enable swap memory on the machines. 6.3.7. Migrating control plane nodes from one RHOSP host to another manually If control plane machine sets are not enabled on your cluster, you can run a script that moves a control plane node from one Red Hat OpenStack Platform (RHOSP) node to another. Note Control plane machine sets are not enabled on clusters that run on user-provisioned infrastructure. For information about control plane machine sets, see "Managing control plane machines with control plane machine sets". Prerequisites The environment variable OS_CLOUD refers to a clouds entry that has administrative credentials in a clouds.yaml file. The environment variable KUBECONFIG refers to a configuration that contains administrative OpenShift Container Platform credentials. Procedure From a command line, run the following script: #!/usr/bin/env bash set -Eeuo pipefail if [ USD# -lt 1 ]; then echo "Usage: 'USD0 node_name'" exit 64 fi # Check for admin OpenStack credentials openstack server list --all-projects >/dev/null || { >&2 echo "The script needs OpenStack admin credentials. Exiting"; exit 77; } # Check for admin OpenShift credentials oc adm top node >/dev/null || { >&2 echo "The script needs OpenShift admin credentials. 
Exiting"; exit 77; } set -x declare -r node_name="USD1" declare server_id server_id="USD(openstack server list --all-projects -f value -c ID -c Name | grep "USDnode_name" | cut -d' ' -f1)" readonly server_id # Drain the node oc adm cordon "USDnode_name" oc adm drain "USDnode_name" --delete-emptydir-data --ignore-daemonsets --force # Power off the server oc debug "node/USD{node_name}" -- chroot /host shutdown -h 1 # Verify the server is shut off until openstack server show "USDserver_id" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done # Migrate the node openstack server migrate --wait "USDserver_id" # Resize the VM openstack server resize confirm "USDserver_id" # Wait for the resize confirm to finish until openstack server show "USDserver_id" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done # Restart the VM openstack server start "USDserver_id" # Wait for the node to show up as Ready: until oc get node "USDnode_name" | grep -q "^USD{node_name}[[:space:]]\+Ready"; do sleep 5; done # Uncordon the node oc adm uncordon "USDnode_name" # Wait for cluster operators to stabilize until oc get co -o go-template='statuses: {{ range .items }}{{ range .status.conditions }}{{ if eq .type "Degraded" }}{{ if ne .status "False" }}DEGRADED{{ end }}{{ else if eq .type "Progressing"}}{{ if ne .status "False" }}PROGRESSING{{ end }}{{ else if eq .type "Available"}}{{ if ne .status "True" }}NOTAVAILABLE{{ end }}{{ end }}{{ end }}{{ end }}' | grep -qv '\(DEGRADED\|PROGRESSING\|NOTAVAILABLE\)'; do sleep 5; done If the script completes, the control plane machine is migrated to a new RHOSP node. Additional resources Managing control plane machines with control plane machine sets 6.4. Adding worker nodes to an on-premise cluster For on-premise clusters, you can add worker nodes by using the OpenShift CLI ( oc ) to generate an ISO image, which can then be used to boot one or more nodes in your target cluster. This process can be used regardless of how you installed your cluster. You can add one or more nodes at a time while customizing each node with more complex configurations, such as static network configuration, or you can specify only the MAC address of each node. Any required configurations that are not specified during ISO generation are retrieved from the target cluster and applied to the new nodes. Note Machine or BareMetalHost resources are not automatically created after a node has been successfully added to the cluster. Preflight validation checks are also performed when booting the ISO image to inform you of failure-causing issues before you attempt to boot each node. 6.4.1. Supported platforms The following platforms are supported for this method of adding nodes: baremetal vsphere none external 6.4.2. Supported architectures The following architecture combinations have been validated to work when adding worker nodes using this process: amd64 worker nodes on amd64 or arm64 clusters arm64 worker nodes on amd64 or arm64 clusters s390x worker nodes on s390x clusters ppc64le worker nodes on ppc64le clusters 6.4.3. Adding nodes to your cluster You can add nodes with this method in the following two ways: Adding one or more nodes using a configuration file. You can specify configurations for one or more nodes in the nodes-config.yaml file before running the oc adm node-image create command. This is useful if you want to add more than one node at a time, or if you are specifying complex configurations. Adding a single node using only command flags. 
You can add a node by running the oc adm node-image create command with flags to specify your configurations. This is useful if you want to add only a single node at a time, and have only simple configurations to specify for that node. 6.4.3.1. Adding one or more nodes using a configuration file You can add one or more nodes to your cluster by using the nodes-config.yaml file to specify configurations for the new nodes. Prerequisites You have installed the OpenShift CLI ( oc ) You have an active connection to your target cluster You have a kubeconfig file available Procedure Create a new YAML file that contains configurations for the nodes you are adding and is named nodes-config.yaml . You must provide a MAC address for each new node. In the following example file, two new workers are described with an initial static network configuration: Example nodes-config.yaml file hosts: - hostname: extra-worker-1 rootDeviceHints: deviceName: /dev/sda interfaces: - macAddress: 00:00:00:00:00:00 name: eth0 networkConfig: interfaces: - name: eth0 type: ethernet state: up mac-address: 00:00:00:00:00:00 ipv4: enabled: true address: - ip: 192.168.122.2 prefix-length: 23 dhcp: false - hostname: extra-worker-2 rootDeviceHints: deviceName: /dev/sda interfaces: - macAddress: 00:00:00:00:00:02 name: eth0 networkConfig: interfaces: - name: eth0 type: ethernet state: up mac-address: 00:00:00:00:00:02 ipv4: enabled: true address: - ip: 192.168.122.3 prefix-length: 23 dhcp: false Generate the ISO image by running the following command: USD oc adm node-image create nodes-config.yaml Important In order for the create command to fetch a release image that matches the target cluster version, you must specify a valid pull secret. You can specify the pull secret either by using the --registry-config flag or by setting the REGISTRY_AUTH_FILE environment variable beforehand. Note If the directory of the nodes-config.yaml file is not specified by using the --dir flag, the tool looks for the file in the current directory. Verify that a new node.<arch>.iso file is present in the asset directory. The asset directory is your current directory, unless you specified a different one when creating the ISO image. Boot the selected node with the generated ISO image. Track progress of the node creation by running the following command: USD oc adm node-image monitor --ip-addresses <ip_addresses> where: <ip_addresses> Specifies a list of the IP addresses of the nodes that are being added. Note If reverse DNS entries are not available for your node, the oc adm node-image monitor command skips checks for pending certificate signing requests (CSRs). If these checks are skipped, you must manually check for CSRs by running the oc get csr command. Approve the CSRs by running the following command for each CSR: USD oc adm certificate approve <csr_name> 6.4.3.2. Adding a node with command flags You can add a single node to your cluster by using command flags to specify configurations for the new node. Prerequisites You have installed the OpenShift CLI ( oc ) You have an active connection to your target cluster You have a kubeconfig file available Procedure Generate the ISO image by running the following command. The MAC address must be specified using a command flag. See the "Cluster configuration reference" section for more flags that you can use with this command. USD oc adm node-image create --mac-address=<mac_address> where: <mac_address> Specifies the MAC address of the node that is being added. 
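For instance, combining this with the optional flags described in the "Cluster configuration reference" section, you might generate an image for a single worker with a fixed hostname and a static network definition as follows; the MAC address, hostname, and file path are placeholders:

$ oc adm node-image create --mac-address=00:00:00:00:00:02 --hostname=extra-worker-2 --network-config-path=./extra-worker-2-nmstate.yaml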
Important In order for the create command to fetch a release image that matches the target cluster version, you must specify a valid pull secret. You can specify the pull secret either by using the --registry-config flag or by setting the REGISTRY_AUTH_FILE environment variable beforehand. Tip To see additional flags that can be used to configure your node, run the following oc adm node-image create --help command. Verify that a new node.<arch>.iso file is present in the asset directory. The asset directory is your current directory, unless you specified a different one when creating the ISO image. Boot the node with the generated ISO image. Track progress of the node creation by running the following command: USD oc adm node-image monitor --ip-addresses <ip_address> where: <ip_address> Specifies a list of the IP addresses of the nodes that are being added. Note If reverse DNS entries are not available for your node, the oc adm node-image monitor command skips checks for pending certificate signing requests (CSRs). If these checks are skipped, you must manually check for CSRs by running the oc get csr command. Approve the pending CSRs by running the following command for each CSR: USD oc adm certificate approve <csr_name> 6.4.4. Cluster configuration reference When creating the ISO image, configurations are retrieved from the target cluster and are applied to the new nodes. Any configurations for your cluster are applied to the nodes unless you override the configurations in either the nodes-config.yaml file or any flags you add to the oc adm node-image create command. 6.4.4.1. YAML file parameters Configuration parameters that can be specified in the nodes-config.yaml file are described in the following table: Table 6.2. nodes-config.yaml parameters Parameter Description Values Host configuration. An array of host configuration objects. Hostname. Overrides the hostname obtained from either the Dynamic Host Configuration Protocol (DHCP) or a reverse DNS lookup. Each host must have a unique hostname supplied by one of these methods, although configuring a hostname through this parameter is optional. String. Provides a table of the name and MAC address mappings for the interfaces on the host. If a NetworkConfig section is provided in the nodes-config.yaml file, this table must be included and the values must match the mappings provided in the NetworkConfig section. An array of host configuration objects. The name of an interface on the host. String. The MAC address of an interface on the host. A MAC address such as the following example: 00-B0-D0-63-C2-26 . Enables provisioning of the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The node-adding tool examines the devices in the order it discovers them, and compares the discovered values with the hint values. It uses the first discovered device that matches the hint value. A dictionary of key-value pairs. For more information, see "Root device hints" in the "Setting up the environment for an OpenShift installation" page. The name of the device the RHCOS image is provisioned to. String. The host network definition. The configuration must match the Host Network Management API defined in the nmstate documentation . A dictionary of host network configuration objects. Optional. Specifies the architecture of the nodes you are adding. This parameter allows you to override the default value from the cluster when required. String. Optional. The file containing the SSH key to authenticate access to your cluster machines. String. 
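Pulling these parameters together, and given that the overview above notes that you can specify only the MAC address of each node, a minimal configuration that leaves networking to DHCP could look like the following. Treat this as a sketch rather than a canonical template, and check the field names against the table above and the earlier full example for your version:

$ cat > nodes-config.yaml <<'EOF'
hosts:
  - interfaces:
      - name: eth0
        macAddress: 00:00:00:00:00:03
EOF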
6.4.4.2. Command flag options You can use command flags with the oc adm node-image create command to configure the nodes you are creating. The following table describes command flags that are not limited to the single-node use case: Table 6.3. General command flags Flag Description Values --certificate-authority The path to a certificate authority bundle to use when communicating with the managed container image registries. If the --insecure flag is used, this flag is ignored. String --dir The path containing the configuration file, if provided. This path is also used to store the generated artifacts. String --insecure Allows push and pull operations to registries to be made over HTTP. Boolean -o , --output-name The name of the generated output image. String -a , --registry-config The path to your registry credentials. Alternatively, you can specify the REGISTRY_AUTH_FILE environment variable. The default paths are USD{XDG_RUNTIME_DIR}/containers/auth.json , /run/containers/USD{UID}/auth.json , USD{XDG_CONFIG_HOME}/containers/auth.json , USD{DOCKER_CONFIG} , ~/.docker/config.json , ~/.dockercfg. The order can be changed through the deprecated REGISTRY_AUTH_PREFERENCE environment variable to a "docker" value, in order to prioritize Docker credentials over Podman. String --skip-verification An option to skip verifying the integrity of the retrieved content. This is not recommended, but might be necessary when importing images from older image registries. Bypass verification only if the registry is known to be trustworthy. Boolean The following table describes command flags that can be used only when creating a single node: Table 6.4. Single-node only command flags Flag Description Values -c , --cpu-architecture The CPU architecture to be used to install the node. This flag can be used to create only a single node, and the --mac-address flag must be defined. String --hostname The hostname to be set for the node. This flag can be used to create only a single node, and the --mac-address flag must be defined. String -m , --mac-address The MAC address used to identify the host to apply configurations to. This flag can be used to create only a single node, and the --mac-address flag must be defined. String --network-config-path The path to a YAML file containing NMState configurations to be applied to the node. This flag can be used to create only a single node, and the --mac-address flag must be defined. String --root-device-hint A hint for specifying the storage location for the image root filesystem. The accepted format is <hint_name>:<value> . This flag can be used to create only a single node, and the --mac-address flag must be defined. String -k , --ssh-key-path The path to the SSH key used to access the node. This flag can be used to create only a single node, and the --mac-address flag must be defined. String Additional resources Root device hints 6.5. Managing the maximum number of pods per node In OpenShift Container Platform, you can configure the number of pods that can run on a node based on the number of processor cores on the node, a hard limit or both. If you use both options, the lower of the two limits the number of pods on a node. When both options are in use, the lower of the two values limits the number of pods on a node. Exceeding these values can result in: Increased CPU utilization. Slow pod scheduling. Potential out-of-memory scenarios, depending on the amount of memory in the node. Exhausting the pool of IP addresses. 
Resource overcommitting, leading to poor user application performance. Important In Kubernetes, a pod that is holding a single container actually uses two containers. The second container is used to set up networking prior to the actual container starting. Therefore, a system running 10 pods will actually have 20 containers running. Note Disk IOPS throttling from the cloud provider might have an impact on CRI-O and kubelet. They might get overloaded when there are large number of I/O intensive pods running on the nodes. It is recommended that you monitor the disk I/O on the nodes and use volumes with sufficient throughput for the workload. The podsPerCore parameter sets the number of pods the node can run based on the number of processor cores on the node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40 . kubeletConfig: podsPerCore: 10 Setting podsPerCore to 0 disables this limit. The default is 0 . The value of the podsPerCore parameter cannot exceed the value of the maxPods parameter. The maxPods parameter sets the number of pods the node can run to a fixed value, regardless of the properties of the node. kubeletConfig: maxPods: 250 6.5.1. Configuring the maximum number of pods per node Two parameters control the maximum number of pods that can be scheduled to a node: podsPerCore and maxPods . If you use both options, the lower of the two limits the number of pods on a node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a max-pods CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4 #... 1 Assign a name to CR. 2 Specify the label from the machine config pool. 3 Specify the number of pods the node can run based on the number of processor cores on the node. 4 Specify the number of pods the node can run to a fixed value, regardless of the properties of the node. Note Setting podsPerCore to 0 disables this limit. In the above example, the default value for podsPerCore is 10 and the default value for maxPods is 250 . This means that unless the node has 25 cores or more, by default, podsPerCore will be the limiting factor. Run the following command to create the CR: USD oc create -f <file_name>.yaml Verification List the MachineConfigPool CRDs to see if the change is applied. 
The UPDATING column reports True if the change is picked up by the Machine Config Controller: USD oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False Once the change is complete, the UPDATED column reports True . USD oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False 6.6. Using the Node Tuning Operator Learn about the Node Tuning Operator and how you can use it to manage node-level tuning by orchestrating the tuned daemon. Purpose The Node Tuning Operator helps you manage node-level tuning by orchestrating the TuneD daemon and achieves low latency performance by using the Performance Profile controller. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning specified by user needs. The Operator manages the containerized TuneD daemon for OpenShift Container Platform as a Kubernetes daemon set. It ensures the custom tuning specification is passed to all containerized TuneD daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node. Node-level settings applied by the containerized TuneD daemon are rolled back on an event that triggers a profile change or when the containerized TuneD daemon is terminated gracefully by receiving and handling a termination signal. The Node Tuning Operator uses the Performance Profile controller to implement automatic tuning to achieve low latency performance for OpenShift Container Platform applications. The cluster administrator configures a performance profile to define node-level settings such as the following: Updating the kernel to kernel-rt. Choosing CPUs for housekeeping. Choosing CPUs for running workloads. The Node Tuning Operator is part of a standard OpenShift Container Platform installation in version 4.1 and later. Note In earlier versions of OpenShift Container Platform, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator. 6.6.1. Accessing an example Node Tuning Operator specification Use this process to access an example Node Tuning Operator specification. Procedure Run the following command to access an example Node Tuning Operator specification: oc get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator The default CR is meant for delivering standard node-level tuning for the OpenShift Container Platform platform and it can only be modified to set the Operator Management state. Any other custom changes to the default CR will be overwritten by the Operator. For custom tuning, create your own Tuned CRs. Newly created CRs will be combined with the default CR and custom tuning applied to OpenShift Container Platform nodes based on node or pod labels and profile priorities. Warning While in certain situations the support for pod labels can be a convenient way of automatically delivering required tuning, this practice is discouraged and strongly advised against, especially in large-scale clusters. 
The default Tuned CR ships without pod label matching. If a custom profile is created with pod label matching, then the functionality will be enabled at that time. The pod label functionality will be deprecated in future versions of the Node Tuning Operator. 6.6.2. Custom tuning specification The custom resource (CR) for the Operator has two major sections. The first section, profile: , is a list of TuneD profiles and their names. The second, recommend: , defines the profile selection logic. Multiple custom tuning specifications can co-exist as multiple CRs in the Operator's namespace. The existence of new CRs or the deletion of old CRs is detected by the Operator. All existing custom tuning specifications are merged and appropriate objects for the containerized TuneD daemons are updated. Management state The Operator Management state is set by adjusting the default Tuned CR. By default, the Operator is in the Managed state and the spec.managementState field is not present in the default Tuned CR. Valid values for the Operator Management state are as follows: Managed: the Operator will update its operands as configuration resources are updated Unmanaged: the Operator will ignore changes to the configuration resources Removed: the Operator will remove its operands and resources the Operator provisioned Profile data The profile: section lists TuneD profiles and their names. profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD # ... - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings Recommended profiles The profile: selection logic is defined by the recommend: section of the CR. The recommend: section is a list of items to recommend the profiles based on a selection criteria. recommend: <recommend-item-1> # ... <recommend-item-n> The individual items of the list: - machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9 1 Optional. 2 A dictionary of key/value MachineConfig labels. The keys must be unique. 3 If omitted, profile match is assumed unless a profile with a higher priority matches first or machineConfigLabels is set. 4 An optional list. 5 Profile ordering priority. Lower numbers mean higher priority ( 0 is the highest priority). 6 A TuneD profile to apply on a match. For example tuned_profile_1 . 7 Optional operand configuration. 8 Turn debugging on or off for the TuneD daemon. Options are true for on or false for off. The default is false . 9 Turn reapply_sysctl functionality on or off for the TuneD daemon. Options are true for on and false for off. <match> is an optional list recursively defined as follows: - label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4 1 Node or pod label name. 2 Optional node or pod label value. If omitted, the presence of <label_name> is enough to match. 3 Optional object type ( node or pod ). If omitted, node is assumed. 4 An optional <match> list. If <match> is not omitted, all nested <match> sections must also evaluate to true . Otherwise, false is assumed and the profile with the respective <match> section will not be applied or recommended. Therefore, the nesting (child <match> sections) works as logical AND operator. 
Conversely, if any item of the <match> list matches, the entire <match> list evaluates to true . Therefore, the list acts as logical OR operator. If machineConfigLabels is defined, machine config pool based matching is turned on for the given recommend: list item. <mcLabels> specifies the labels for a machine config. The machine config is created automatically to apply host settings, such as kernel boot parameters, for the profile <tuned_profile_name> . This involves finding all machine config pools with machine config selector matching <mcLabels> and setting the profile <tuned_profile_name> on all nodes that are assigned the found machine config pools. To target nodes that have both master and worker roles, you must use the master role. The list items match and machineConfigLabels are connected by the logical OR operator. The match item is evaluated first in a short-circuit manner. Therefore, if it evaluates to true , the machineConfigLabels item is not considered. Important When using machine config pool based matching, it is advised to group nodes with the same hardware configuration into the same machine config pool. Not following this practice might result in TuneD operands calculating conflicting kernel parameters for two or more nodes sharing the same machine config pool. Example: Node or pod label based matching - match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node The CR above is translated for the containerized TuneD daemon into its recommend.conf file based on the profile priorities. The profile with the highest priority ( 10 ) is openshift-control-plane-es and, therefore, it is considered first. The containerized TuneD daemon running on a given node looks to see if there is a pod running on the same node with the tuned.openshift.io/elasticsearch label set. If not, the entire <match> section evaluates as false . If there is such a pod with the label, in order for the <match> section to evaluate to true , the node label also needs to be node-role.kubernetes.io/master or node-role.kubernetes.io/infra . If the labels for the profile with priority 10 matched, openshift-control-plane-es profile is applied and no other profile is considered. If the node/pod label combination did not match, the second highest priority profile ( openshift-control-plane ) is considered. This profile is applied if the containerized TuneD pod runs on a node with labels node-role.kubernetes.io/master or node-role.kubernetes.io/infra . Finally, the profile openshift-node has the lowest priority of 30 . It lacks the <match> section and, therefore, will always match. It acts as a profile catch-all to set openshift-node profile, if no other profile with higher priority matches on a given node. 
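To see which profile the recommendation logic actually selected for each node, you can list the per-node Profile objects that the Operator maintains in its namespace; this assumes the default namespace used throughout this section:

$ oc get profiles.tuned.openshift.io -n openshift-cluster-node-tuning-operator

The output lists one entry per node together with the TuneD profile that is currently recommended and applied, which is a quick way to confirm that your match rules behave as intended.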
Example: Machine config pool based matching apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: "worker-custom" priority: 20 profile: openshift-node-custom To minimize node reboots, label the target nodes with a label the machine config pool's node selector will match, then create the Tuned CR above and finally create the custom machine config pool itself. Cloud provider-specific TuneD profiles With this functionality, all Cloud provider-specific nodes can conveniently be assigned a TuneD profile specifically tailored to a given Cloud provider on a OpenShift Container Platform cluster. This can be accomplished without adding additional node labels or grouping nodes into machine config pools. This functionality takes advantage of spec.providerID node object values in the form of <cloud-provider>://<cloud-provider-specific-id> and writes the file /var/lib/ocp-tuned/provider with the value <cloud-provider> in NTO operand containers. The content of this file is then used by TuneD to load provider-<cloud-provider> profile if such profile exists. The openshift profile that both openshift-control-plane and openshift-node profiles inherit settings from is now updated to use this functionality through the use of conditional profile loading. Neither NTO nor TuneD currently include any Cloud provider-specific profiles. However, it is possible to create a custom profile provider-<cloud-provider> that will be applied to all Cloud provider-specific cluster nodes. Example GCE Cloud provider profile apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. name: provider-gce Note Due to profile inheritance, any setting specified in the provider-<cloud-provider> profile will be overwritten by the openshift profile and its child profiles. 6.6.3. Default profiles set on a cluster The following are the default profiles set on a cluster. apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/ocp-tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40 Starting with OpenShift Container Platform 4.9, all OpenShift TuneD profiles are shipped with the TuneD package. You can use the oc exec command to view the contents of these profiles: USD oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \; 6.6.4. 
Supported TuneD daemon plugins Excluding the [main] section, the following TuneD plugins are supported when using custom profiles defined in the profile: section of the Tuned CR: audio cpu disk eeepc_she modules mounts net scheduler scsi_host selinux sysctl sysfs usb video vm bootloader There is some dynamic tuning functionality provided by some of these plugins that is not supported. The following TuneD plugins are currently not supported: script systemd Note The TuneD bootloader plugin only supports Red Hat Enterprise Linux CoreOS (RHCOS) worker nodes. Additional resources Available TuneD Plugins Getting Started with TuneD 6.7. Remediating, fencing, and maintaining nodes When node-level failures occur, such as the kernel hangs or network interface controllers (NICs) fail, the work required from the cluster does not decrease, and workloads from affected nodes need to be restarted somewhere. Failures affecting these workloads risk data loss, corruption, or both. It is important to isolate the node, known as fencing , before initiating recovery of the workload, known as remediation , and recovery of the node. For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation. 6.8. Understanding node rebooting To reboot a node without causing an outage for applications running on the platform, it is important to first evacuate the pods. For pods that are made highly available by the routing tier, nothing else needs to be done. For other pods needing storage, typically databases, it is critical to ensure that they can remain in operation with one pod temporarily going offline. While implementing resiliency for stateful pods is different for each application, in all cases it is important to configure the scheduler to use node anti-affinity to ensure that the pods are properly spread across available nodes. Another challenge is how to handle nodes that are running critical infrastructure such as the router or the registry. The same node evacuation process applies, though it is important to understand certain edge cases. 6.8.1. About rebooting nodes running critical infrastructure When rebooting nodes that host critical OpenShift Container Platform infrastructure components, such as router pods, registry pods, and monitoring pods, ensure that there are at least three nodes available to run these components. The following scenario demonstrates how service interruptions can occur with applications running on OpenShift Container Platform when only two nodes are available: Node A is marked unschedulable and all pods are evacuated. The registry pod running on that node is now redeployed on node B. Node B is now running both registry pods. Node B is now marked unschedulable and is evacuated. The service exposing the two pod endpoints on node B loses all endpoints, for a brief period of time, until they are redeployed to node A. When using three nodes for infrastructure components, this process does not result in a service disruption. However, due to pod scheduling, the last node that is evacuated and brought back into rotation does not have a registry pod. One of the other nodes has two registry pods. To schedule the third registry pod on the last node, use pod anti-affinity to prevent the scheduler from locating two registry pods on the same node. Additional information For more information on pod anti-affinity, see Placing pods relative to other pods using affinity and anti-affinity rules . 6.8.2. 
Rebooting a node using pod anti-affinity Pod anti-affinity is slightly different than node anti-affinity. Node anti-affinity can be violated if there are no other suitable locations to deploy a pod. Pod anti-affinity can be set to either required or preferred. With this in place, if only two infrastructure nodes are available and one is rebooted, the container image registry pod is prevented from running on the other node. oc get pods reports the pod as unready until a suitable node is available. Once a node is available and all pods are back in ready state, the node can be restarted. Procedure To reboot a node using pod anti-affinity: Edit the node specification to configure pod anti-affinity: apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: registry 4 operator: In 5 values: - default topologyKey: kubernetes.io/hostname #... 1 Stanza to configure pod anti-affinity. 2 Defines a preferred rule. 3 Specifies a weight for a preferred rule. The node with the highest weight is preferred. 4 Description of the pod label that determines when the anti-affinity rule applies. Specify a key and value for the label. 5 The operator represents the relationship between the label on the existing pod and the set of values in the matchExpression parameters in the specification for the new pod. Can be In , NotIn , Exists , or DoesNotExist . This example assumes the container image registry pod has a label of registry=default . Pod anti-affinity can use any Kubernetes match expression. Enable the MatchInterPodAffinity scheduler predicate in the scheduling policy file. Perform a graceful restart of the node. 6.8.3. Understanding how to reboot nodes running routers In most cases, a pod running an OpenShift Container Platform router exposes a host port. The PodFitsPorts scheduler predicate ensures that no router pods using the same port can run on the same node, and pod anti-affinity is achieved. If the routers are relying on IP failover for high availability, there is nothing else that is needed. For router pods relying on an external service such as AWS Elastic Load Balancing for high availability, it is that service's responsibility to react to router pod restarts. In rare cases, a router pod may not have a host port configured. In those cases, it is important to follow the recommended restart process for infrastructure nodes. 6.8.4. Rebooting a node gracefully Before rebooting a node, it is recommended to backup etcd data to avoid any data loss on the node. Note For single-node OpenShift clusters that require users to perform the oc login command rather than having the certificates in kubeconfig file to manage the cluster, the oc adm commands might not be available after cordoning and draining the node. This is because the openshift-oauth-apiserver pod is not running due to the cordon. You can use SSH to access the nodes as indicated in the following procedure. In a single-node OpenShift cluster, pods cannot be rescheduled when cordoning and draining. However, doing so gives the pods, especially your workload pods, time to properly stop and release associated resources. 
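Because the drain step in the following procedure can be blocked by custom pod disruption budgets, it can save time to review them before you start; a quick check across all namespaces looks like this:

$ oc get poddisruptionbudget --all-namespaces

Any budget that currently allows zero disruptions points to pods that the drain command cannot evict without the --disable-eviction flag described in the procedure.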
Procedure To perform a graceful restart of a node: Mark the node as unschedulable: USD oc adm cordon <node1> Drain the node to remove all the running pods: USD oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force You might receive errors that pods associated with custom pod disruption budgets (PDB) cannot be evicted. Example error error when evicting pods/"rails-postgresql-example-1-72v2w" -n "rails" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. In this case, run the drain command again, adding the disable-eviction flag, which bypasses the PDB checks: USD oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force --disable-eviction Access the node in debug mode: USD oc debug node/<node1> Change your root directory to /host : USD chroot /host Restart the node: USD systemctl reboot In a moment, the node enters the NotReady state. Note With some single-node OpenShift clusters, the oc commands might not be available after you cordon and drain the node because the openshift-oauth-apiserver pod is not running. You can use SSH to connect to the node and perform the reboot. USD ssh core@<master-node>.<cluster_name>.<base_domain> USD sudo systemctl reboot After the reboot is complete, mark the node as schedulable by running the following command: USD oc adm uncordon <node1> Note With some single-node OpenShift clusters, the oc commands might not be available after you cordon and drain the node because the openshift-oauth-apiserver pod is not running. You can use SSH to connect to the node and uncordon it. USD ssh core@<target_node> USD sudo oc adm uncordon <node> --kubeconfig /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost.kubeconfig Verify that the node is ready: USD oc get node <node1> Example output NAME STATUS ROLES AGE VERSION <node1> Ready worker 6d22h v1.18.3+b0068a8 Additional information For information on etcd data backup, see Backing up etcd data . 6.9. Freeing node resources using garbage collection As an administrator, you can use OpenShift Container Platform to ensure that your nodes are running efficiently by freeing up resources through garbage collection. The OpenShift Container Platform node performs two types of garbage collection: Container garbage collection: Removes terminated containers. Image garbage collection: Removes images not referenced by any running pods. 6.9.1. Understanding how terminated containers are removed through garbage collection Container garbage collection removes terminated containers by using eviction thresholds. When eviction thresholds are set for garbage collection, the node tries to keep any container for any pod accessible from the API. If the pod has been deleted, the containers will be as well. Containers are preserved as long the pod is not deleted and the eviction threshold is not reached. If the node is under disk pressure, it will remove containers and their logs will no longer be accessible using oc logs . eviction-soft - A soft eviction threshold pairs an eviction threshold with a required administrator-specified grace period. eviction-hard - A hard eviction threshold has no grace period, and if observed, OpenShift Container Platform takes immediate action. The following table lists the eviction thresholds: Table 6.5. Variables for configuring container garbage collection Node condition Eviction signal Description MemoryPressure memory.available The available memory on the node. 
DiskPressure nodefs.available nodefs.inodesFree imagefs.available imagefs.inodesFree The available disk space or inodes on the node root file system, nodefs , or image file system, imagefs . Note For evictionHard you must specify all of these parameters. If you do not specify all parameters, only the specified parameters are applied and the garbage collection will not function properly. If a node is oscillating above and below a soft eviction threshold, but not exceeding its associated grace period, the corresponding node would constantly oscillate between true and false . As a consequence, the scheduler could make poor scheduling decisions. To protect against this oscillation, use the evictionpressure-transition-period flag to control how long OpenShift Container Platform must wait before transitioning out of a pressure condition. OpenShift Container Platform will not set an eviction threshold as being met for the specified pressure condition for the period specified before toggling the condition back to false. Note Setting the evictionPressureTransitionPeriod parameter to 0 configures the default value of 5 minutes. You cannot set an eviction pressure transition period to zero seconds. 6.9.2. Understanding how images are removed through garbage collection Image garbage collection removes images that are not referenced by any running pods. OpenShift Container Platform determines which images to remove from a node based on the disk usage that is reported by cAdvisor . The policy for image garbage collection is based on two conditions: The percent of disk usage (expressed as an integer) which triggers image garbage collection. The default is 85 . The percent of disk usage (expressed as an integer) to which image garbage collection attempts to free. Default is 80 . For image garbage collection, you can modify any of the following variables using a custom resource. Table 6.6. Variables for configuring image garbage collection Setting Description imageMinimumGCAge The minimum age for an unused image before the image is removed by garbage collection. The default is 2m . imageGCHighThresholdPercent The percent of disk usage, expressed as an integer, which triggers image garbage collection. The default is 85 . This value must be greater than the imageGCLowThresholdPercent value. imageGCLowThresholdPercent The percent of disk usage, expressed as an integer, to which image garbage collection attempts to free. The default is 80 . This value must be less than the imageGCHighThresholdPercent value. Two lists of images are retrieved in each garbage collector run: A list of images currently running in at least one pod. A list of images available on a host. As new containers are run, new images appear. All images are marked with a time stamp. If the image is running (the first list above) or is newly detected (the second list above), it is marked with the current time. The remaining images are already marked from the spins. All images are then sorted by the time stamp. Once the collection starts, the oldest images get deleted first until the stopping criterion is met. 6.9.3. Configuring garbage collection for containers and images As an administrator, you can configure how OpenShift Container Platform performs garbage collection by creating a kubeletConfig object for each machine config pool. Note OpenShift Container Platform supports only one kubeletConfig object for each machine config pool. 
You can configure any combination of the following: Soft eviction for containers Hard eviction for containers Eviction for images Container garbage collection removes terminated containers. Image garbage collection removes images that are not referenced by any running pods. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: Procedure Create a custom resource (CR) for your configuration change. Important If there is one file system, or if /var/lib/kubelet and /var/lib/containers/ are in the same file system, the settings with the highest values trigger evictions, as those are met first. The file system triggers the eviction. Sample configuration for a container garbage collection CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: evictionSoft: 3 memory.available: "500Mi" 4 nodefs.available: "10%" nodefs.inodesFree: "5%" imagefs.available: "15%" imagefs.inodesFree: "10%" evictionSoftGracePeriod: 5 memory.available: "1m30s" nodefs.available: "1m30s" nodefs.inodesFree: "1m30s" imagefs.available: "1m30s" imagefs.inodesFree: "1m30s" evictionHard: 6 memory.available: "200Mi" nodefs.available: "5%" nodefs.inodesFree: "4%" imagefs.available: "10%" imagefs.inodesFree: "5%" evictionPressureTransitionPeriod: 3m 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10 #... 1 Name for the object. 2 Specify the label from the machine config pool. 3 For container garbage collection: Type of eviction: evictionSoft or evictionHard . 4 For container garbage collection: Eviction thresholds based on a specific eviction trigger signal. 5 For container garbage collection: Grace periods for the soft eviction. This parameter does not apply to eviction-hard . 6 For container garbage collection: Eviction thresholds based on a specific eviction trigger signal. For evictionHard you must specify all of these parameters. If you do not specify all parameters, only the specified parameters are applied and the garbage collection will not function properly. 7 For container garbage collection: The duration to wait before transitioning out of an eviction pressure condition. Setting the evictionPressureTransitionPeriod parameter to 0 configures the default value of 5 minutes. 8 For image garbage collection: The minimum age for an unused image before the image is removed by garbage collection. 9 For image garbage collection: Image garbage collection is triggered at the specified percent of disk usage (expressed as an integer). This value must be greater than the imageGCLowThresholdPercent value. 10 For image garbage collection: Image garbage collection attempts to free resources to the specified percent of disk usage (expressed as an integer). This value must be less than the imageGCHighThresholdPercent value. 
Run the following command to create the CR: USD oc create -f <file_name>.yaml For example: USD oc create -f gc-container.yaml Example output kubeletconfig.machineconfiguration.openshift.io/gc-container created Verification Verify that garbage collection is active by entering the following command. The Machine Config Pool you specified in the custom resource appears with UPDATING as 'true` until the change is fully implemented: USD oc get machineconfigpool Example output NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True 6.10. Allocating resources for nodes in an OpenShift Container Platform cluster To provide more reliable scheduling and minimize node resource overcommitment, reserve a portion of the CPU and memory resources for use by the underlying node components, such as kubelet and kube-proxy , and the remaining system components, such as sshd and NetworkManager . By specifying the resources to reserve, you provide the scheduler with more information about the remaining CPU and memory resources that a node has available for use by pods. You can allow OpenShift Container Platform to automatically determine the optimal system-reserved CPU and memory resources for your nodes or you can manually determine and set the best resources for your nodes. Important To manually set resource values, you must use a kubelet config CR. You cannot use a machine config CR. 6.10.1. Understanding how to allocate resources for nodes CPU and memory resources reserved for node components in OpenShift Container Platform are based on two node settings: Setting Description kube-reserved This setting is not used with OpenShift Container Platform. Add the CPU and memory resources that you planned to reserve to the system-reserved setting. system-reserved This setting identifies the resources to reserve for the node components and system components, such as CRI-O and Kubelet. The default settings depend on the OpenShift Container Platform and Machine Config Operator versions. Confirm the default systemReserved parameter on the machine-config-operator repository. If a flag is not set, the defaults are used. If none of the flags are set, the allocated resource is set to the node's capacity as it was before the introduction of allocatable resources. Note Any CPUs specifically reserved using the reservedSystemCPUs parameter are not available for allocation using kube-reserved or system-reserved . 6.10.1.1. How OpenShift Container Platform computes allocated resources An allocated amount of a resource is computed based on the following formula: Note The withholding of Hard-Eviction-Thresholds from Allocatable improves system reliability because the value for Allocatable is enforced for pods at the node level. If Allocatable is negative, it is set to 0 . Each node reports the system resources that are used by the container runtime and kubelet. To simplify configuring the system-reserved parameter, view the resource use for the node by using the node summary API. The node summary is available at /api/v1/nodes/<node>/proxy/stats/summary . 6.10.1.2. How nodes enforce resource constraints The node is able to limit the total amount of resources that pods can consume based on the configured allocatable value. This feature significantly improves the reliability of the node by preventing pods from using CPU and memory resources that are needed by system services such as the container runtime and node agent. 
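For example, you can query the node summary API mentioned above, or compare a node's reported capacity and allocatable values, to see how much has been withheld for system reservations and eviction thresholds. The node name is a placeholder:
oc get --raw "/api/v1/nodes/<node_name>/proxy/stats/summary"
oc get node <node_name> -o jsonpath='{.status.capacity.memory} {.status.allocatable.memory}'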
To improve node reliability, administrators should reserve resources based on a target for resource use. The node enforces resource constraints by using a new cgroup hierarchy that enforces quality of service. All pods are launched in a dedicated cgroup hierarchy that is separate from system daemons. Administrators should treat system daemons similar to pods that have a guaranteed quality of service. System daemons can burst within their bounding control groups and this behavior must be managed as part of cluster deployments. Reserve CPU and memory resources for system daemons by specifying the amount of CPU and memory resources in system-reserved . Enforcing system-reserved limits can prevent critical system services from receiving CPU and memory resources. As a result, a critical system service can be ended by the out-of-memory killer. The recommendation is to enforce system-reserved only if you have profiled the nodes exhaustively to determine precise estimates and you are confident that critical system services can recover if any process in that group is ended by the out-of-memory killer. 6.10.1.3. Understanding Eviction Thresholds If a node is under memory pressure, it can impact the entire node and all pods running on the node. For example, a system daemon that uses more than its reserved amount of memory can trigger an out-of-memory event. To avoid or reduce the probability of system out-of-memory events, the node provides out-of-resource handling. You can reserve some memory using the --eviction-hard flag. The node attempts to evict pods whenever memory availability on the node drops below the absolute value or percentage. If system daemons do not exist on a node, pods are limited to the memory capacity - eviction-hard . For this reason, resources set aside as a buffer for eviction before reaching out of memory conditions are not available for pods. The following is an example to illustrate the impact of node allocatable for memory: Node capacity is 32Gi --system-reserved is 3Gi --eviction-hard is set to 100Mi . For this node, the effective node allocatable value is 28.9Gi . If the node and system components use all their reservation, the memory available for pods is 28.9Gi , and kubelet evicts pods when it exceeds this threshold. If you enforce node allocatable, 28.9Gi , with top-level cgroups, then pods can never exceed 28.9Gi . Evictions are not performed unless system daemons consume more than 3.1Gi of memory. If system daemons do not use up all their reservation, with the above example, pods would face memcg OOM kills from their bounding cgroup before node evictions kick in. To better enforce QoS under this situation, the node applies the hard eviction thresholds to the top-level cgroup for all pods to be Node Allocatable + Eviction Hard Thresholds . If system daemons do not use up all their reservation, the node will evict pods whenever they consume more than 28.9Gi of memory. If eviction does not occur in time, a pod will be OOM killed if pods consume 29Gi of memory. 6.10.1.4. How the scheduler determines resource availability The scheduler uses the value of node.Status.Allocatable instead of node.Status.Capacity to decide if a node will become a candidate for pod scheduling. By default, the node will report its machine capacity as fully schedulable by the cluster. 6.10.2. 
Automatically allocating resources for nodes OpenShift Container Platform can automatically determine the optimal system-reserved CPU and memory resources for nodes associated with a specific machine config pool and update the nodes with those values when the nodes start. By default, the system-reserved CPU is 500m and system-reserved memory is 1Gi . To automatically determine and allocate the system-reserved resources on nodes, create a KubeletConfig custom resource (CR) to set the autoSizingReserved: true parameter. A script on each node calculates the optimal values for the respective reserved resources based on the installed CPU and memory capacity on each node. The script takes into account that increased capacity requires a corresponding increase in the reserved resources. Automatically determining the optimal system-reserved settings ensures that your cluster is running efficiently and prevents node failure due to resource starvation of system components, such as CRI-O and kubelet, without your needing to manually calculate and update the values. This feature is disabled by default. Prerequisites Obtain the label associated with the static MachineConfigPool object for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels . Tip If an appropriate label is not present, add a key/value pair such as: Procedure Create a custom resource (CR) for your configuration change: Sample configuration for a resource allocation CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: dynamic-node 1 spec: autoSizingReserved: true 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 3 #... 1 Assign a name to CR. 2 Add the autoSizingReserved parameter set to true to allow OpenShift Container Platform to automatically determine and allocate the system-reserved resources on the nodes associated with the specified label. To disable automatic allocation on those nodes, set this parameter to false . 3 Specify the label from the machine config pool that you configured in the "Prerequisites" section. You can choose any desired labels for the machine config pool, such as custom-kubelet: small-pods , or the default label, pools.operator.machineconfiguration.openshift.io/worker: "" . The example enables automatic resource allocation on all worker nodes. OpenShift Container Platform drains the nodes, applies the kubelet config, and restarts the nodes. Create the CR by entering the following command: USD oc create -f <file_name>.yaml Verification Log in to a node you configured by entering the following command: USD oc debug node/<node_name> Set /host as the root directory within the debug shell: # chroot /host View the /etc/node-sizing.env file: Example output SYSTEM_RESERVED_MEMORY=3Gi SYSTEM_RESERVED_CPU=0.08 The kubelet uses the system-reserved values in the /etc/node-sizing.env file. In the example, the worker nodes are allocated 0.08 CPU and 3 Gi of memory. It can take several minutes for the optimal values to appear. 6.10.3. Manually allocating resources for nodes OpenShift Container Platform supports the CPU and memory resource types for allocation. 
The ephemeral-storage resource type is also supported. For the cpu type, you specify the resource quantity in units of cores, such as 200m , 0.5 , or 1 . For memory and ephemeral-storage , you specify the resource quantity in units of bytes, such as 200Ki , 50Mi , or 5Gi . By default, the system-reserved CPU is 500m and system-reserved memory is 1Gi . As an administrator, you can set these values by using a kubelet config custom resource (CR) through a set of <resource_type>=<resource_quantity> pairs (e.g., cpu=200m,memory=512Mi ). Important You must use a kubelet config CR to manually set resource values. You cannot use a machine config CR. For details on the recommended system-reserved values, refer to the recommended system-reserved values . Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a resource allocation CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-allocatable 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: systemReserved: 3 cpu: 1000m memory: 1Gi #... 1 Assign a name to the CR. 2 Specify the label from the machine config pool. 3 Specify the resources to reserve for the node components and system components. Run the following command to create the CR: USD oc create -f <file_name>.yaml 6.11. Allocating specific CPUs for nodes in a cluster When using the static CPU Manager policy , you can reserve specific CPUs for use by specific nodes in your cluster. For example, on a system with 24 CPUs, you could reserve CPUs numbered 0 - 3 for the control plane, allowing the compute nodes to use CPUs 4 - 23. 6.11.1. Reserving CPUs for nodes To explicitly define a list of CPUs that are reserved for specific nodes, create a KubeletConfig custom resource (CR) to define the reservedSystemCPUs parameter. This list supersedes the CPUs that might be reserved using the systemReserved parameter. Procedure Obtain the label associated with the machine config pool (MCP) for the type of node you want to configure: USD oc describe machineconfigpool <name> For example: USD oc describe machineconfigpool worker Example output Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= pools.operator.machineconfiguration.openshift.io/worker= 1 Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool #... 1 Get the MCP label. Create a YAML file for the KubeletConfig CR: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-reserved-cpus 1 spec: kubeletConfig: reservedSystemCPUs: "0,1,2,3" 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 3 #... 1 Specify a name for the CR. 2 Specify the core IDs of the CPUs you want to reserve for the nodes associated with the MCP. 3 Specify the label from the MCP.
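After you create the CR in the next step and the affected nodes have restarted with the new configuration, you can optionally confirm that the reservation is in effect by inspecting the rendered kubelet configuration on a node. This is an illustrative check only, reusing the kubelet.conf path shown in the TLS verification later in this chapter; the node name is a placeholder:
oc debug node/<node_name> -- chroot /host grep reservedSystemCPUs /etc/kubernetes/kubelet.conf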
Create the CR object: USD oc create -f <file_name>.yaml Additional resources For more information on the systemReserved parameter, see Allocating resources for nodes in an OpenShift Container Platform cluster . 6.12. Enabling TLS security profiles for the kubelet You can use a TLS (Transport Layer Security) security profile to define which TLS ciphers are required by the kubelet when it is acting as an HTTP server. The kubelet uses its HTTP/GRPC server to communicate with the Kubernetes API server, which sends commands to pods, gathers logs, and run exec commands on pods through the kubelet. A TLS security profile defines the TLS ciphers that the Kubernetes API server must use when connecting with the kubelet to protect communication between the kubelet and the Kubernetes API server. Note By default, when the kubelet acts as a client with the Kubernetes API server, it automatically negotiates the TLS parameters with the API server. 6.12.1. Understanding TLS security profiles You can use a TLS (Transport Layer Security) security profile to define which TLS ciphers are required by various OpenShift Container Platform components. The OpenShift Container Platform TLS security profiles are based on Mozilla recommended configurations . You can specify one of the following TLS security profiles for each component: Table 6.7. TLS security profiles Profile Description Old This profile is intended for use with legacy clients or libraries. The profile is based on the Old backward compatibility recommended configuration. The Old profile requires a minimum TLS version of 1.0. Note For the Ingress Controller, the minimum TLS version is converted from 1.0 to 1.1. Intermediate This profile is the recommended configuration for the majority of clients. It is the default TLS security profile for the Ingress Controller, kubelet, and control plane. The profile is based on the Intermediate compatibility recommended configuration. The Intermediate profile requires a minimum TLS version of 1.2. Modern This profile is intended for use with modern clients that have no need for backwards compatibility. This profile is based on the Modern compatibility recommended configuration. The Modern profile requires a minimum TLS version of 1.3. Custom This profile allows you to define the TLS version and ciphers to use. Warning Use caution when using a Custom profile, because invalid configurations can cause problems. Note When using one of the predefined profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 might cause a new profile configuration to be applied, resulting in a rollout. 6.12.2. Configuring the TLS security profile for the kubelet To configure a TLS security profile for the kubelet when it is acting as an HTTP server, create a KubeletConfig custom resource (CR) to specify a predefined or custom TLS security profile for specific nodes. If a TLS security profile is not configured, the default TLS security profile is Intermediate . Sample KubeletConfig CR that configures the Old TLS security profile on worker nodes apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig # ... spec: tlsSecurityProfile: old: {} type: Old machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" # ... 
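By analogy with the Old profile example above, a sketch of a KubeletConfig CR that explicitly pins worker nodes to the Intermediate profile (the default) would set type: Intermediate with an empty intermediate: {} field; the metadata name here is illustrative:
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-kubelet-tls-intermediate
spec:
  tlsSecurityProfile:
    type: Intermediate
    intermediate: {}
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""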
You can see the ciphers and the minimum TLS version of the configured TLS security profile in the kubelet.conf file on a configured node. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. Procedure Create a KubeletConfig CR to configure the TLS security profile: Sample KubeletConfig CR for a Custom profile apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-tls-security-profile spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 4 #... 1 Specify the TLS security profile type ( Old , Intermediate , or Custom ). The default is Intermediate . 2 Specify the appropriate field for the selected type: old: {} intermediate: {} custom: 3 For the custom type, specify a list of TLS ciphers and minimum accepted TLS version. 4 Optional: Specify the machine config pool label for the nodes you want to apply the TLS security profile. Create the KubeletConfig object: USD oc create -f <filename> Depending on the number of worker nodes in the cluster, wait for the configured nodes to be rebooted one by one. Verification To verify that the profile is set, perform the following steps after the nodes are in the Ready state: Start a debug session for a configured node: USD oc debug node/<node_name> Set /host as the root directory within the debug shell: sh-4.4# chroot /host View the kubelet.conf file: sh-4.4# cat /etc/kubernetes/kubelet.conf Example output "kind": "KubeletConfiguration", "apiVersion": "kubelet.config.k8s.io/v1beta1", #... "tlsCipherSuites": [ "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256" ], "tlsMinVersion": "VersionTLS12", #... 6.13. Creating infrastructure nodes Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' You can use infrastructure machine sets to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and the components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment. In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Both OpenShift Logging and Red Hat OpenShift Service Mesh deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. 
This configuration requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete the misscheduled DNS pods or add a toleration to them . 6.13.1. OpenShift Container Platform infrastructure components Each self-managed Red Hat OpenShift subscription includes entitlements for OpenShift Container Platform and other OpenShift-related components. These entitlements are included for running OpenShift Container Platform control plane and infrastructure workloads and do not need to be accounted for during sizing. To qualify as an infrastructure node and use the included entitlement, only components that are supporting the cluster, and not part of an end-user application, can run on those instances. Examples include the following components: Kubernetes and OpenShift Container Platform control plane services The default router The integrated container image registry The HAProxy-based Ingress Controller The cluster metrics collection, or monitoring service, including components for monitoring user-defined projects Cluster aggregated logging Red Hat Quay Red Hat OpenShift Data Foundation Red Hat Advanced Cluster Management for Kubernetes Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift GitOps Red Hat OpenShift Pipelines Red Hat OpenShift Service Mesh Any node that runs any other container, pod, or component is a worker node that your subscription must cover. For information about infrastructure nodes and which components can run on infrastructure nodes, see the "Red Hat OpenShift control plane and infrastructure nodes" section in the OpenShift sizing and subscription guide for enterprise Kubernetes document. To create an infrastructure node, you can use a machine set , label the node , or use a machine config pool . 6.13.1.1. Creating an infrastructure node Important See Creating infrastructure machine sets for installer-provisioned infrastructure environments or for any cluster where the control plane nodes are managed by the machine API. Requirements of the cluster dictate that infrastructure nodes, also called infra nodes, be provisioned. The installer only provisions control plane and worker nodes. Worker nodes can be designated as infrastructure nodes or application nodes, also called app nodes, through labeling. Procedure Add a label to the worker node that you want to act as an application node: USD oc label node <node-name> node-role.kubernetes.io/app="" Add a label to the worker nodes that you want to act as infrastructure nodes: USD oc label node <node-name> node-role.kubernetes.io/infra="" Check to see if applicable nodes now have the infra and app roles: USD oc get nodes Create a default cluster-wide node selector. The default node selector is applied to pods created in all namespaces. This creates an intersection with any existing node selectors on a pod, which additionally constrains the pod's selector. Important If the default node selector key conflicts with the key of a pod's label, then the default node selector is not applied. However, do not set a default node selector that might cause a pod to become unschedulable.
For example, setting the default node selector to a specific node role, such as node-role.kubernetes.io/infra="" , when a pod's label is set to a different node role, such as node-role.kubernetes.io/master="" , can cause the pod to become unschedulable. For this reason, use caution when setting the default node selector to specific node roles. You can alternatively use a project node selector to avoid cluster-wide node selector key conflicts. Edit the Scheduler object: USD oc edit scheduler cluster Add the defaultNodeSelector field with the appropriate node selector: apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra="" 1 # ... 1 This example node selector deploys pods on infrastructure nodes by default. Save the file to apply the changes. You can now move infrastructure resources to the newly labeled infra nodes. Additional resources Moving resources to infrastructure machine sets | [
"oc get nodes",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.30.3 node1.example.com Ready worker 7h v1.30.3 node2.example.com Ready worker 7h v1.30.3",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.30.3 node1.example.com NotReady,SchedulingDisabled worker 7h v1.30.3 node2.example.com Ready worker 7h v1.30.3",
"oc get nodes -o wide",
"NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME master.example.com Ready master 171m v1.30.3 10.0.129.108 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.30.3-30.rhaos4.10.gitf2f339d.el8-dev node1.example.com Ready worker 72m v1.30.3 10.0.129.222 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.30.3-30.rhaos4.10.gitf2f339d.el8-dev node2.example.com Ready worker 164m v1.30.3 10.0.142.150 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.30.3-30.rhaos4.10.gitf2f339d.el8-dev",
"oc get node <node>",
"oc get node node1.example.com",
"NAME STATUS ROLES AGE VERSION node1.example.com Ready worker 7h v1.30.3",
"oc describe node <node>",
"oc describe node node1.example.com",
"Name: node1.example.com 1 Roles: worker 2 Labels: kubernetes.io/os=linux kubernetes.io/hostname=ip-10-0-131-14 kubernetes.io/arch=amd64 3 node-role.kubernetes.io/worker= node.kubernetes.io/instance-type=m4.large node.openshift.io/os_id=rhcos node.openshift.io/os_version=4.5 region=east topology.kubernetes.io/region=us-east-1 topology.kubernetes.io/zone=us-east-1a Annotations: cluster.k8s.io/machine: openshift-machine-api/ahardin-worker-us-east-2a-q5dzc 4 machineconfiguration.openshift.io/currentConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/desiredConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/state: Done volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Wed, 13 Feb 2019 11:05:57 -0500 Taints: <none> 5 Unschedulable: false Conditions: 6 Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- OutOfDisk False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientDisk kubelet has sufficient disk space available MemoryPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:07:09 -0500 KubeletReady kubelet is posting ready status Addresses: 7 InternalIP: 10.0.140.16 InternalDNS: ip-10-0-140-16.us-east-2.compute.internal Hostname: ip-10-0-140-16.us-east-2.compute.internal Capacity: 8 attachable-volumes-aws-ebs: 39 cpu: 2 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8172516Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7558116Ki pods: 250 System Info: 9 Machine ID: 63787c9534c24fde9a0cde35c13f1f66 System UUID: EC22BF97-A006-4A58-6AF8-0A38DEEA122A Boot ID: f24ad37d-2594-46b4-8830-7f7555918325 Kernel Version: 3.10.0-957.5.1.el7.x86_64 OS Image: Red Hat Enterprise Linux CoreOS 410.8.20190520.0 (Ootpa) Operating System: linux Architecture: amd64 Container Runtime Version: cri-o://1.30.3-0.6.dev.rhaos4.3.git9ad059b.el8-rc2 Kubelet Version: v1.30.3 Kube-Proxy Version: v1.30.3 PodCIDR: 10.128.4.0/24 ProviderID: aws:///us-east-2a/i-04e87b31dc6b3e171 Non-terminated Pods: (12 in total) 10 Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- openshift-cluster-node-tuning-operator tuned-hdl5q 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-dns dns-default-l69zr 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-image-registry node-ca-9hmcg 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-ingress router-default-76455c45c-c5ptv 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-machine-config-operator machine-config-daemon-cvqw9 20m (1%) 0 (0%) 50Mi (0%) 0 (0%) openshift-marketplace community-operators-f67fh 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-monitoring alertmanager-main-0 50m (3%) 50m (3%) 210Mi (2%) 10Mi (0%) openshift-monitoring node-exporter-l7q8d 10m (0%) 20m (1%) 20Mi (0%) 40Mi (0%) openshift-monitoring prometheus-adapter-75d769c874-hvb85 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-multus multus-kw8w5 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-ovn-kubernetes ovnkube-node-t4dsn 80m (0%) 0 (0%) 
1630Mi (0%) 0 (0%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 380m (25%) 270m (18%) memory 880Mi (11%) 250Mi (3%) attachable-volumes-aws-ebs 0 0 Events: 11 Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeHasSufficientPID 6d (x5 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 6d kubelet, m01.example.com Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasNoDiskPressure Normal NodeHasSufficientDisk 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientDisk Normal NodeHasSufficientPID 6d kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal Starting 6d kubelet, m01.example.com Starting kubelet. #",
"oc get pod --selector=<nodeSelector>",
"oc get pod --selector=kubernetes.io/os",
"oc get pod -l=<nodeSelector>",
"oc get pod -l kubernetes.io/os=linux",
"oc get pod --all-namespaces --field-selector=spec.nodeName=<nodename>",
"oc adm top nodes",
"NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% ip-10-0-12-143.ec2.compute.internal 1503m 100% 4533Mi 61% ip-10-0-132-16.ec2.compute.internal 76m 5% 1391Mi 18% ip-10-0-140-137.ec2.compute.internal 398m 26% 2473Mi 33% ip-10-0-142-44.ec2.compute.internal 656m 43% 6119Mi 82% ip-10-0-146-165.ec2.compute.internal 188m 12% 3367Mi 45% ip-10-0-19-62.ec2.compute.internal 896m 59% 5754Mi 77% ip-10-0-44-193.ec2.compute.internal 632m 42% 5349Mi 72%",
"oc adm top node --selector=''",
"oc adm cordon <node1>",
"node/<node1> cordoned",
"oc get node <node1>",
"NAME STATUS ROLES AGE VERSION <node1> Ready,SchedulingDisabled worker 1d v1.30.3",
"oc adm drain <node1> <node2> [--pod-selector=<pod_selector>]",
"oc adm drain <node1> <node2> --force=true",
"oc adm drain <node1> <node2> --grace-period=-1",
"oc adm drain <node1> <node2> --ignore-daemonsets=true",
"oc adm drain <node1> <node2> --timeout=5s",
"oc adm drain <node1> <node2> --delete-emptydir-data=true",
"oc adm drain <node1> <node2> --dry-run=true",
"oc adm uncordon <node1>",
"oc label node <node> <key_1>=<value_1> ... <key_n>=<value_n>",
"oc label nodes webconsole-7f7f6 unhealthy=true",
"kind: Node apiVersion: v1 metadata: name: webconsole-7f7f6 labels: unhealthy: 'true' #",
"oc label pods --all <key_1>=<value_1>",
"oc label pods --all status=unhealthy",
"oc adm cordon <node>",
"oc adm cordon node1.example.com",
"node/node1.example.com cordoned NAME LABELS STATUS node1.example.com kubernetes.io/hostname=node1.example.com Ready,SchedulingDisabled",
"oc adm uncordon <node1>",
"oc delete pods --field-selector status.phase=Failed -n <POD_NAMESPACE>",
"oc get machinesets -n openshift-machine-api",
"oc scale --replicas=2 machineset <machine-set-name> -n openshift-machine-api",
"oc edit machineset <machine-set-name> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: # name: <machine-set-name> namespace: openshift-machine-api # spec: replicas: 2 1 #",
"oc adm cordon <node_name>",
"oc adm drain <node_name> --force=true",
"oc delete node <node_name>",
"oc get machineconfigpool --show-labels",
"NAME CONFIG UPDATED UPDATING DEGRADED LABELS master rendered-master-e05b81f5ca4db1d249a1bf32f9ec24fd True False False operator.machineconfiguration.openshift.io/required-for-upgrade= worker rendered-worker-f50e78e1bc06d8e82327763145bfcf62 True False False",
"oc label machineconfigpool worker custom-kubelet=enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-config 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: enabled 2 kubeletConfig: 3 podsPerCore: 10 maxPods: 250 systemReserved: cpu: 2000m memory: 1Gi #",
"oc create -f <file-name>",
"oc create -f master-kube-config.yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: ci-ln-hmy310k-72292-5f87z-worker-a namespace: openshift-machine-api spec: template: spec: providerSpec: value: disks: - autoDelete: true boot: true image: projects/rhcos-cloud/global/images/rhcos-412-85-202203181601-0-gcp-x86-64 1",
"oc edit MachineConfiguration cluster",
"apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: All 2",
"apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: Partial partial: machineResourceSelector: matchLabels: update-boot-image: \"true\" 2",
"oc label machineset.machine ci-ln-hmy310k-72292-5f87z-worker-a update-boot-image=true -n openshift-machine-api",
"oc get machinesets <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: ci-ln-77hmkpt-72292-d4pxp update-boot-image: \"true\" name: ci-ln-77hmkpt-72292-d4pxp-worker-a namespace: openshift-machine-api spec: template: spec: providerSpec: value: disks: - autoDelete: true boot: true image: projects/rhcos-cloud/global/images/rhcos-416-92-202402201450-0-gcp-x86-64 1",
"oc edit MachineConfiguration cluster",
"apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: All",
"oc edit schedulers.config.openshift.io cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: \"2019-09-10T03:04:05Z\" generation: 1 name: cluster resourceVersion: \"433\" selfLink: /apis/config.openshift.io/v1/schedulers/cluster uid: a636d30a-d377-11e9-88d4-0a60097bee62 spec: mastersSchedulable: false 1 status: {} #",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux booleans Before=kubelet.service [Service] Type=oneshot ExecStart=/sbin/setsebool container_manage_cgroup=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target enabled: true name: setsebool.service #",
"oc create -f 99-worker-setsebool.yaml",
"oc get MachineConfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 05-worker-kernelarg-selinuxpermissive 2 spec: kernelArguments: - enforcing=0 3",
"oc create -f 05-worker-kernelarg-selinuxpermissive.yaml",
"oc get MachineConfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 05-worker-kernelarg-selinuxpermissive 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.30.3 ip-10-0-136-243.ec2.internal Ready master 34m v1.30.3 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.30.3 ip-10-0-142-249.ec2.internal Ready master 34m v1.30.3 ip-10-0-153-11.ec2.internal Ready worker 28m v1.30.3 ip-10-0-153-150.ec2.internal Ready master 34m v1.30.3",
"oc debug node/ip-10-0-141-105.ec2.internal",
"Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... ostree=/ostree/boot.0/rhcos/16 coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0 sh-4.2# exit",
"oc label machineconfigpool worker kubelet-swap=enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: swap-config spec: machineConfigPoolSelector: matchLabels: kubelet-swap: enabled kubeletConfig: failSwapOn: false 1 memorySwap: swapBehavior: LimitedSwap 2 #",
"#!/usr/bin/env bash set -Eeuo pipefail if [ USD# -lt 1 ]; then echo \"Usage: 'USD0 node_name'\" exit 64 fi Check for admin OpenStack credentials openstack server list --all-projects >/dev/null || { >&2 echo \"The script needs OpenStack admin credentials. Exiting\"; exit 77; } Check for admin OpenShift credentials adm top node >/dev/null || { >&2 echo \"The script needs OpenShift admin credentials. Exiting\"; exit 77; } set -x declare -r node_name=\"USD1\" declare server_id server_id=\"USD(openstack server list --all-projects -f value -c ID -c Name | grep \"USDnode_name\" | cut -d' ' -f1)\" readonly server_id Drain the node adm cordon \"USDnode_name\" adm drain \"USDnode_name\" --delete-emptydir-data --ignore-daemonsets --force Power off the server debug \"node/USD{node_name}\" -- chroot /host shutdown -h 1 Verify the server is shut off until openstack server show \"USDserver_id\" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done Migrate the node openstack server migrate --wait \"USDserver_id\" Resize the VM openstack server resize confirm \"USDserver_id\" Wait for the resize confirm to finish until openstack server show \"USDserver_id\" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done Restart the VM openstack server start \"USDserver_id\" Wait for the node to show up as Ready: until oc get node \"USDnode_name\" | grep -q \"^USD{node_name}[[:space:]]\\+Ready\"; do sleep 5; done Uncordon the node adm uncordon \"USDnode_name\" Wait for cluster operators to stabilize until oc get co -o go-template='statuses: {{ range .items }}{{ range .status.conditions }}{{ if eq .type \"Degraded\" }}{{ if ne .status \"False\" }}DEGRADED{{ end }}{{ else if eq .type \"Progressing\"}}{{ if ne .status \"False\" }}PROGRESSING{{ end }}{{ else if eq .type \"Available\"}}{{ if ne .status \"True\" }}NOTAVAILABLE{{ end }}{{ end }}{{ end }}{{ end }}' | grep -qv '\\(DEGRADED\\|PROGRESSING\\|NOTAVAILABLE\\)'; do sleep 5; done",
"hosts: - hostname: extra-worker-1 rootDeviceHints: deviceName: /dev/sda interfaces: - macAddress: 00:00:00:00:00:00 name: eth0 networkConfig: interfaces: - name: eth0 type: ethernet state: up mac-address: 00:00:00:00:00:00 ipv4: enabled: true address: - ip: 192.168.122.2 prefix-length: 23 dhcp: false - hostname: extra-worker-2 rootDeviceHints: deviceName: /dev/sda interfaces: - macAddress: 00:00:00:00:00:02 name: eth0 networkConfig: interfaces: - name: eth0 type: ethernet state: up mac-address: 00:00:00:00:00:02 ipv4: enabled: true address: - ip: 192.168.122.3 prefix-length: 23 dhcp: false",
"oc adm node-image create nodes-config.yaml",
"oc adm node-image monitor --ip-addresses <ip_addresses>",
"oc adm certificate approve <csr_name>",
"oc adm node-image create --mac-address=<mac_address>",
"oc adm node-image monitor --ip-addresses <ip_address>",
"oc adm certificate approve <csr_name>",
"hosts:",
"hosts: hostname:",
"hosts: interfaces:",
"hosts: interfaces: name:",
"hosts: interfaces: macAddress:",
"hosts: rootDeviceHints:",
"hosts: rootDeviceHints: deviceName:",
"hosts: networkConfig:",
"cpuArchitecture",
"sshKey",
"kubeletConfig: podsPerCore: 10",
"kubeletConfig: maxPods: 250",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4 #",
"oc create -f <file_name>.yaml",
"oc get machineconfigpools",
"NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False",
"oc get machineconfigpools",
"NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False",
"get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator",
"profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings",
"recommend: <recommend-item-1> <recommend-item-n>",
"- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9",
"- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4",
"- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. name: provider-gce",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/ocp-tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40",
"oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \\;",
"apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: registry 4 operator: In 5 values: - default topologyKey: kubernetes.io/hostname #",
"oc adm cordon <node1>",
"oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force",
"error when evicting pods/\"rails-postgresql-example-1-72v2w\" -n \"rails\" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.",
"oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force --disable-eviction",
"oc debug node/<node1>",
"chroot /host",
"systemctl reboot",
"ssh core@<master-node>.<cluster_name>.<base_domain>",
"sudo systemctl reboot",
"oc adm uncordon <node1>",
"ssh core@<target_node>",
"sudo oc adm uncordon <node> --kubeconfig /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost.kubeconfig",
"oc get node <node1>",
"NAME STATUS ROLES AGE VERSION <node1> Ready worker 6d22h v1.18.3+b0068a8",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: evictionSoft: 3 memory.available: \"500Mi\" 4 nodefs.available: \"10%\" nodefs.inodesFree: \"5%\" imagefs.available: \"15%\" imagefs.inodesFree: \"10%\" evictionSoftGracePeriod: 5 memory.available: \"1m30s\" nodefs.available: \"1m30s\" nodefs.inodesFree: \"1m30s\" imagefs.available: \"1m30s\" imagefs.inodesFree: \"1m30s\" evictionHard: 6 memory.available: \"200Mi\" nodefs.available: \"5%\" nodefs.inodesFree: \"4%\" imagefs.available: \"10%\" imagefs.inodesFree: \"5%\" evictionPressureTransitionPeriod: 3m 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10 #",
"oc create -f <file_name>.yaml",
"oc create -f gc-container.yaml",
"kubeletconfig.machineconfiguration.openshift.io/gc-container created",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True",
"[Allocatable] = [Node Capacity] - [system-reserved] - [Hard-Eviction-Thresholds]",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: dynamic-node 1 spec: autoSizingReserved: true 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 3 #",
"oc create -f <file_name>.yaml",
"oc debug node/<node_name>",
"chroot /host",
"SYSTEM_RESERVED_MEMORY=3Gi SYSTEM_RESERVED_CPU=0.08",
"oc edit machineconfigpool <name>",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-allocatable 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: systemReserved: 3 cpu: 1000m memory: 1Gi #",
"oc create -f <file_name>.yaml",
"oc describe machineconfigpool <name>",
"oc describe machineconfigpool worker",
"Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= pools.operator.machineconfiguration.openshift.io/worker= 1 Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool #",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-reserved-cpus 1 spec: kubeletConfig: reservedSystemCPUs: \"0,1,2,3\" 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 3 #",
"oc create -f <file_name>.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig spec: tlsSecurityProfile: old: {} type: Old machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\"",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-tls-security-profile spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 4 #",
"oc create -f <filename>",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-4.4# cat /etc/kubernetes/kubelet.conf",
"\"kind\": \"KubeletConfiguration\", \"apiVersion\": \"kubelet.config.k8s.io/v1beta1\", # \"tlsCipherSuites\": [ \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256\", \"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256\" ], \"tlsMinVersion\": \"VersionTLS12\", #",
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"oc label node <node-name> node-role.kubernetes.io/app=\"\"",
"oc label node <node-name> node-role.kubernetes.io/infra=\"\"",
"oc get nodes",
"oc edit scheduler cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra=\"\" 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/nodes/working-with-nodes |
5.8. Automating the Installation with Kickstart | 5.8. Automating the Installation with Kickstart Red Hat Enterprise Linux 7 offers a way to partially or fully automate the installation process using a Kickstart file . Kickstart files contain answers to all questions normally asked by the installation program, such as what time zone do you want the system to use, how should the drives be partitioned or which packages should be installed. Providing a prepared Kickstart file at the beginning of the installation therefore allows you to perform the entire installation (or parts of it) automatically, without need for any intervention from the user. This is especially useful when deploying Red Hat Enterprise Linux on a large number of systems at once. In addition to allowing you to automate the installation, Kickstart files also provide more options regarding software selection. When installing Red Hat Enterprise Linux manually using the graphical installation interface, your software selection is limited to pre-defined environments and add-ons. A Kickstart file allows you to install or remove individual packages as well. For instructions about creating a Kickstart file and using it to automate the installation, see Chapter 27, Kickstart Installations . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-installation-planning-kickstart-x86 |
Part I. Planning a Red Hat Process Automation installation | Part I. Planning a Red Hat Process Automation installation As a system administrator, you have several options for installing Red Hat Process Automation. | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/assembly-planning |
Chapter 2. Analyzing Metrics | Chapter 2. Analyzing Metrics Kibana offers two ways of analyzing metrics: Build your own visualizations such as charts, graphs, and tables. Load and use predefined sets of visualizations Red Hat suggests that you start off by using the predefined visualizations. Each set is known as a dashboard . Dashboards have the advantage of enabling you to quickly access a wide range of metrics while offering the flexibility of changing them to match your individual needs. 2.1. Using Dashboards A dashboard displays a set of saved visualizations. Dashboards have the advantage of enabling you to quickly access a wide range of metrics while offering the flexibility of changing them to match your individual needs. You can use the Dashboard tab to create your own dashboards. Alternatively, Red Hat provides the following dashboard examples, which you can import into Kibana and use as is or customize to suit your specific needs: System dashboard Hosts dashboard VMs dashboard Importing Dashboard Examples Note The dashboard examples are only available after completing the procedure for Deploying collectd and rsyslog . Copy the /etc/ovirt-engine-metrics/dashboards-examples directory from the Manager virtual machine to your local machine. Log in to the Kibana console using the URL ( https://kibana. example.com ) that you recorded during the Metrics Store installation process. Use the default admin user, and the password you defined during the installation. Open Kibana and click the Management tab. Click the Saved Objects tab. Click Import and import Searches from your local copy of /etc/ovirt-engine-metrics/dashboards-examples . Click Import and import Visualizations . Note If you see an error message while importing the visualizations, check your hosts to ensure that collectd and rsyslog are running without errors. Click Import and import Dashboards . Note If you are logged in as the admin user, you may see a message regarding missing index patterns while importing the visualizations. Select the project. * index pattern instead. In the Elasticsearch Kibana portal, go to Home Management Stack Management , select Kibana Index Patterns , and add the following index patterns: project.ovirt-metrics-<ovirt_env_name>* project.ovirt-logs-<ovirt_env_name>* Click Refresh for each of the new index patterns. The imported dashboards are now stored in the system. Loading Saved Dashboards Once you have created and saved a dashboard, or imported Red Hat's sample dashboards, you can display them in the Dashboard tab: Click the Dashboards tab to display a list of saved dashboards. Click a saved dashboard to load it. 2.2. Creating a New Visualization Use the Visualize page to design data visualizations based on the metrics or log data collected by Metrics Store. You can save these visualizations, use them individually, or combine visualizations into a dashboard. A visualization can be based on one of the following data source types: A new interactive search A saved search An existing saved visualization Visualizations are based on Elasticsearch's aggregation feature . Creating a New Visualization Kibana guides you through the creation process with the help of a visualization wizard. To start the new visualization wizard, click the Visualize tab. In step 1, Create a new visualization table , select the type of visualization you want to create. 
In step 2, Select a search source , select whether you want to create a new search or reuse a saved search: To create a new search, select From a new search and enter the indexes to use as the source. Use project.ovirt-logs prefix for log data or project.ovirt-metrics prefix for metric data. To create a visualization from a saved search, select From a saved search and enter the name of the search. The visualization editor appears. 2.3. Graphic User Interface Elements The visualization editor consists of three main areas: The toolbar The aggregation builder The preview pane Visualization Editor 2.4. Using the Visualization Editor Use the visualization editor to create visualizations by: Submitting search queries from the toolbar Selecting metrics and aggregations from the aggregation builder 2.4.1. Submitting Search Queries Use the toolbar to perform search queries based on the Lucene query parser syntax. For a detailed explanation of this syntax, see Apache Lucene - Query Parser Syntax . 2.4.2. Selecting Metrics and Aggregations Use the aggregation builder to define which metrics to display, how to aggregate the data, and how to group the results. The aggregation builder performs two types of aggregations, metric and bucket, which differ depending on the type of visualization you are creating: Bar, line, or area chart visualizations use metrics for the y-axis and buckets for the x-axis, segment bar colors, and row/column splits. Pie charts use metrics for the slice size and buckets to define the number of slices. To define a visualization from the aggregation bar: Select the metric aggregation for your visualization's y-axis from the Aggregation drop-down list in the metrics section, for example, count, average, sum, min, max, or unique count. For more information about how these aggregations are calculated, see Metrics Aggregation in the Elasticsearch Reference documentation. Use the buckets area to select the aggregations for the visualization's x-axis, color slices, and row/column splits: Use the Aggregation drop-down list to define how to aggregate the bucket. Common bucket aggregations include date histogram, range, terms, filters, and significant terms. The order in which you define the buckets determines the order in which they will be executed, so the first aggregation determines the data set for any subsequent aggregations. For more information, see Aggregation Builder in the Kibana documentation. Select the metric you want to display from the Field drop-down list. For details about each of the available metrics, see Metrics Schema . Select the required interval from the Interval field. Click Apply Changes . 2.5. Metrics Schema The following sections describe the metrics that are available from the Field menu when creating visualizations. Note All metric values are collected at 10 second intervals. 2.5.1. Aggregation Metrics The Aggregation metric aggregates several values into one using aggregation functions such as sum, average, min, and max. It is used to provide a combined value for average and total CPU statistics. The following table describes the aggregation metrics reported by the Aggregation plugin. Metric Name collectd.type_instance Description collectd.aggregation.percent interrupt user wait nice softirq system idle steal The average and total CPU usage, as an aggregated percentage, for each of the collectd.type_instance states. 
Additional Values collectd.plugin: Aggregation collectd.type_instance: cpu-average / cpu-sum collectd.plugin_instance: collectd.type: percent ovirt.entity: host ovirt.cluster.name.raw: The cluster's name ovirt.engine_fqdn.raw: The Manager's FQDN hostname: The host's FQDN ipaddr4: IP address interval: 10 collectd.dstypes: Gauge 2.5.2. CPU Metrics CPU metrics display the amount of time spent by the hosts' CPUs, as a percentage. The following table describes CPU metrics as reported by the CPU plugin. Table 2.1. CPU Metrics Metric Name collectd.type_instance Description collectd.cpu.percent interrupt user wait nice softirq system idle steal The percentage of time spent, per CPU, in the collectd.type_instance states. Additional Values collectd.plugin: CPU collectd.plugin_instance: The CPU's number collectd.type: percent ovirt.entity: host ovirt.cluster.name.raw: The cluster's name ovirt.engine_fqdn.raw: The Manager's FQDN hostname: The host's FQDN ipaddr4: IP address interval: 10 collectd.dstypes: Gauge 2.5.3. CPU Load Average Metrics CPU load represents CPU contention, that is, the average number of schedulable processes at any given time. This is reported as an average value for all CPU cores on the host. Each CPU core can only execute one process at a time. Therefore, a CPU load average above 1.0 indicates that the CPUs have more work than they can perform, and the system is overloaded. CPU load is reported over short term (last one minute), medium term (last five minutes) and long term (last fifteen minutes). While it is normal for a host's short term load average to exceed 1.0 (for a single CPU), sustained load average above 1.0 on a host may indicate a problem. On multi-processor systems, the load is relative to the number of processor cores available. The "100% utilization" mark is 1.00 on a single-core, 2.00 on a dual-core, 4.00 on a quad-core system. Red Hat recommends looking at CPU load in conjunction with CPU Metrics . The following table describes the CPU load metrics reported by the Load plugin. Table 2.2. CPU Load Average Metrics Metric Name Description collectd.load.load.longterm Average number of schedulable processes per CPU core over the last 15 minutes. A value above 1.0 indicates the system was overloaded during the last 15 minutes. collectd.load.load.midterm Average number of schedulable processes per CPU core over the last five minutes. A value above 1.0 indicates the system was overloaded during the last 5 minutes. collectd.load.load.shortterm Average number of schedulable processes per CPU core over the last one minute. A value above 1.0 indicates the system was overloaded during the last minute. Additional Values collectd.plugin: Load collectd.type: load collectd.type_instance: None collectd.plugin_instance: None ovirt.entity: host ovirt.cluster.name.raw: The cluster's name ovirt.engine_fqdn.raw: The Manager's FQDN hostname: The host's FQDN ipaddr4: IP address interval: 10 collectd.dstypes: Gauge 2.5.4. Disk Consumption Metrics Disk consumption (DF) metrics enable you to monitor metrics about disk consumption, such as the used, reserved, and free space for each mounted file system. The following table describes the disk consumption metrics reported by the DF plugin. Metric Name Description collectd.df.df_complex The amount of free, used, and reserved disk space, in bytes, on this file system. collectd.df.percent_bytes The amount of free, used, and reserved disk space, as a percentage of total disk space, on this file system. 
Additional Values collectd.plugin: DF collectd.type_instance: free, used, reserved collectd.plugin_instance: A mounted partition ovirt.entity: host ovirt.cluster.name.raw: The cluster's name ovirt.engine_fqdn.raw: The Manager's FQDN hostname: The host's FQDN ipaddr4: IP address interval: 10 collectd.dstypes: Gauge 2.5.5. Disk Operation Metrics Disk operation metrics are reported per physical disk on the host, and per partition. The following table describes the disk operation metrics reported by the Disk plugin. Table 2.3. Disk Operation Metrics Metric Name Description collectd.dstypes collectd.disk.disk_ops.read The number of disk read operations. Derive collectd.disk.disk_ops.write The number of disk write operations. Derive collectd.disk.disk_merged.read The number of disk reads that have been merged into single physical disk access operations. In other words, this metric measures the number of instances in which one physical disk access served multiple disk reads. The higher the number, the better. Derive collectd.disk.disk_merged.write The number of disk writes that were merged into single physical disk access operations. In other words, this metric measures the number of instances in which one physical disk access served multiple write operations. The higher the number, the better. Derive collectd.disk.disk_time.read The average amount of time it took to do a read operation, in milliseconds. Derive collectd.disk.disk_time.write The average amount of time it took to do a write operation, in milliseconds. Derive collectd.disk.pending_operations The queue size of pending I/O operations. Gauge collectd.disk.disk_io_time.io_time The time spent doing I/Os in milliseconds. This can be used as a device load percentage, where a value of 1 second of time spent represents a 100% load. Derive collectd.disk.disk_io_time.weighted_io_time A measure of both I/O completion time and the backlog that may be accumulating. Derive Additional Values collectd.plugin: Disk collectd.type_instance: None collectd.plugin_instance: The disk's name ovirt.entity: host ovirt.cluster.name.raw: The cluster's name ovirt.engine_fqdn.raw: The Manager's FQDN hostname: The host's FQDN ipaddr4: IP address interval: 10 2.5.6. Entropy Metrics Entropy metrics display the available entropy pool size on the host. Entropy is important for generating random numbers, which are used for encryption, authorization, and similar tasks. The following table describes the entropy metrics reported by the Entropy plugin. Table 2.4. Entropy Metrics Metric Name Description collectd.entropy.entropy The entropy pool size, in bits, on the host. Additional Values collectd.plugin: Entropy collectd.type_instance: None collectd.plugin_instance: None ovirt.entity: host ovirt.cluster.name.raw: The cluster's name ovirt.engine_fqdn.raw: The Manager's FQDN hostname: The host's FQDN ipaddr4: IP address interval: 10 collectd.dstypes: Gauge 2.5.7. Network Interface Metrics The following types of metrics are reported from physical and virtual network interfaces on the host: Bytes (octets) transmitted and received (total, or per second) Packets transmitted and received (total, or per second) Interface errors (total, or per second) The following table describes the network interface metrics reported by the Interface plugin. Table 2.5. Network Interface Metrics collectd.type Metric Name Description if_octets collectd.interface.if_octets.rx A count of the bytes received by the interface. 
You can view this metric as a Rate/sec or a cumulative count (Max): * Rate/sec: Provides the current traffic level on the interface in bytes/sec. * Max: Provides the cumulative count of bytes received. Note that since this metric is a cumulative counter, its value will periodically restart from zero when the maximum possible value of the counter is exceeded. if_octets collectd.interface.if_octets.tx A count of the bytes transmitted by the interface. You can view this metric as a Rate/sec or a cumulative count (Max): * Rate/sec: Provides the current traffic level on the interface in bytes/sec. * Max: Provides the cumulative count of bytes transmitted. Note that since this metric is a cumulative counter, its value will periodically restart from zero when the maximum possible value of the counter is exceeded. if_packets collectd.interface.if_packets.rx A count of the packets received by the interface. You can view this metric as a Rate/sec or a cumulative count (Max): * Rate/sec: Provides the current traffic level on the interface in bytes/sec. * Max: Provides the cumulative count of packets received. Note that since this metric is a cumulative counter, its value will periodically restart from zero when the maximum possible value of the counter is exceeded. if_packets collectd.interface.if_packets.tx A count of the packets transmitted by the interface. You can view this metric as a Rate/sec or a cumulative count (Max): * Rate/sec: Provides the current traffic level on the interface in packets/sec. * Max: Provides the cumulative count of packets transmitted. Note that since this metric is a cumulative counter, its value will periodically restart from zero when the maximum possible value of the counter is exceeded. if_errors collectd.interface.if_errors.rx A count of errors received on the interface. You can view this metric as a Rate/sec or a cumulative count (Max). * Rate/sec rollup provides the current rate of errors received on the interface in errors/sec. * Max rollup provides the total number of errors received since the beginning. Note that since this is a cumulative counter, its value will periodically restart from zero when the maximum possible value of the counter is exceeded. if_errors collectd.interface.if_errors.tx A count of errors transmitted on the interface. You can view this metric as a Rate/sec or a cumulative count (Max). * Rate/sec rollup provides the current rate of errors transmitted on the interface in errors/sec. * Max rollup provides the total number of errors transmitted since the beginning. Note that since this is a cumulative counter, its value will periodically restart from zero when the maximum possible value of the counter is exceeded. if_dropped collectd.interface.if_dropped.rx A count of dropped packets received on the interface. You can view this metric as a Rate/sec or a cumulative count (Max). * Rate/sec rollup provides the current rate of dropped packets received on the interface in packets/sec. * Max rollup provides the total number of dropped packets received since the beginning. Note that since this is a cumulative counter, its value will periodically restart from zero when the maximum possible value of the counter is exceeded. if_dropped collectd.interface.if_dropped.tx A count of dropped packets transmitted on the interface. You can view this metric as a Rate/sec or a cumulative count (Max). * Rate/sec rollup provides the current rate of dropped packets transmitted on the interface in packets/sec. 
* Max rollup provides the total number of dropped packets transmitted since the beginning. Note that since this is a cumulative counter, its value will periodically restart from zero when the maximum possible value of the counter is exceeded. Additional Values collectd.plugin: Interface collectd.type_instance: None collectd.plugin_instance: The network's name ovirt.entity: host ovirt.cluster.name.raw: The cluster's name ovirt.engine_fqdn.raw: The Manager's FQDN hostname: The host's FQDN ipaddr4: IP address interval: 10 collectd.dstypes: Derive 2.5.8. Memory Metrics Metrics collected about memory usage. The following table describes the memory usage metrics reported by the Memory plugin. Table 2.6. Memory Metrics Metric Name collectd.type collectd.type_instance Description collectd.memory.memory memory used The total amount of memory used. free The total amount of unused memory. cached The amount of memory used for caching disk data for reads, memory-mapped files, or tmpfs data. buffered The amount of memory used for buffering, mostly for I/O operations. slab_recl The amount of reclaimable memory used for slab kernel allocations. slab_unrecl Amount of unreclaimable memory used for slab kernel allocations. collectd.memory.percent percent used The total amount of memory used, as a percentage. free The total amount of unused memory, as a percentage. cached The amount of memory used for caching disk data for reads, memory-mapped files, or tmpfs data, as a percentage. buffered The amount of memory used for buffering I/O operations, as a percentage. slab_recl The amount of reclaimable memory used for slab kernel allocations, as a percentage. slab_unrecl The amount of unreclaimable memory used for slab kernel allocations, as a percentage. Additional Values collectd.plugin: Memory collectd.plugin_instance: None ovirt.entity: Host ovirt.cluster.name.raw: The cluster's name ovirt.engine_fqdn.raw: The Manager's FQDN hostname: The host's FQDN ipaddr4: IP address interval: 10 collectd.dstypes: Gauge 2.5.9. NFS Metrics NFS metrics enable you to analyze the use of NFS procedures. The following table describes the NFS metrics reported by the NFS plugin. Metric Name collectd.type_instance Description collectd.nfs.nfs_procedure null / getattr / lookup / access / readlink / read / write / create / mkdir / symlink / mknod / rename / readdir / remove / link / fsstat / fsinfo / readdirplus / pathconf / rmdir / commit / compound / reserved / access / close / delegpurge / putfh / putpubfh putrootfh / renew / restorefh / savefh / secinfo / setattr / setclientid / setcltid_confirm / verify / open / openattr / open_confirm / exchange_id / create_session / destroy_session / bind_conn_to_session / delegreturn / getattr / getfh / lock / lockt / locku / lookupp / open_downgrade / nverify / release_lockowner / backchannel_ctl / free_stateid / get_dir_delegation / getdeviceinfo / getdevicelist / layoutcommit / layoutget / layoutreturn / secinfo_no_name / sequence / set_ssv / test_stateid / want_delegation / destroy_clientid / reclaim_complete The number of processes per collectd.type_instance state. Additional Values collectd.plugin: NFS collectd.plugin_instance: File system + server or client (for example: v3client) collectd.type: nfs_procedure ovirt.entity: host ovirt.cluster.name.raw: The cluster's name ovirt.engine_fqdn.raw: The Manager's FQDN hostname: The host's FQDN ipaddr4: IP address interval: 10 collectd.dstypes: Derive 2.5.10. 
PostgreSQL Metrics PostgreSQL data collected by executing SQL statements on a PostgreSQL database. The following table describes the PostgreSQL metrics reported by the PostgreSQL plugin. Table 2.7. PostgreSQL Metrics Metric Name collectd.type_instance Description collectd.postgresql.pg_numbackends N/A How many server processes this database is using. collectd.postgresql.pg_n_tup_g live The number of live rows in the database. dead The number of dead rows in the database. Rows that are deleted or obsoleted by an update are not physically removed from their table; they remain present as dead rows until a VACUUM is performed. collectd.postgresql.pg_n_tup_c del The number of delete operations. upd The number of update operations. hot_upd The number of update operations that have been performed without requiring an index update. ins The number of insert operations. collectd.postgresql.pg_xact num_deadlocks The number of deadlocks that have been detected by the database. Deadlocks are caused by two or more competing actions that are unable to finish because each is waiting for the other's resources to be unlocked. collectd.postgresql.pg_db_size N/A The size of the database on disk, in bytes. collectd.postgresql.pg_blks heap_read How many disk blocks have been read. heap_hit How many read operations were served from the buffer in memory, so that a disk read was not necessary. This only includes hits in the PostgreSQL buffer cache, not the operating system's file system cache. idx_read How many disk blocks have been read by index access operations. idx_hit How many index access operations have been served from the buffer in memory. toast_read How many disk blocks have been read on TOAST tables. toast_hit How many TOAST table reads have been served from buffer in memory. tidx_read How many disk blocks have been read by index access operations on TOAST tables. Additional Values collectd.plugin: Postgresql collectd.plugin_instance: Database's Name ovirt.entity: engine ovirt.cluster.name.raw: The cluster's name ovirt.engine_fqdn.raw: The Manager's FQDN hostname: The host's FQDN ipaddr4: IP address interval: 10 collectd.dstypes: Gauge 2.5.11. Process Metrics The following table describes the process metrics reported by the Processes plugin. Metric Name collectd.type Description collectd.dstypes collectd.processes.ps_state ps_state The number of processes in each state. Gauge collectd.processes.ps_disk_ops.read ps_disk_ops The process's I/O read operations. Derive collectd.processes.ps_disk_ops.write ps_disk_ops The process's I/O write operations. Derive collectd.processes.ps_vm ps_vm The total amount of memory including swap. Gauge collectd.processes.ps_rss ps_rss The amount of physical memory assigned to the process. Gauge collectd.processes.ps_data ps_data Gauge collectd.processes.ps_code ps_code Gauge collectd.processes.ps_stacksize ps_stacksize Gauge collectd.processes.ps_cputime.syst ps_cputime The amount of time spent by the matching processes in kernel mode. The values are scaled to microseconds per second to match collectd's numbers. Derive collectd.processes.ps_cputime.user ps_cputime The amount of time spent by the matching processes in user mode. The values are scaled to microseconds per second. Derive collectd.processes.ps_count.processes ps_count The number of processes for the defined process. Gauge collectd.processes.ps_count.threads ps_count The number of threads for the defined process. 
Gauge collectd.processes.ps_pagefaults.majflt ps_pagefaults The number of major page faults caused by the process. Derive collectd.processes.ps_pagefaults.minflt ps_pagefaults The number of minor page faults caused by the process. Derive collectd.processes.ps_disk_octets.write ps_disk_octets The process's I/O write operations in transferred bytes. Derive collectd.processes.ps_disk_octets.read ps_disk_octets The process's I/O read operations in transferred bytes. Derive collectd.processes.fork_rate fork_rate The system's fork rate. Derive Additional Values collectd.plugin: Processes collectd.type_instance: N/A (except for collectd.processes.ps_state=running/ zombies/ stopped/ paging/ blocked/ sleeping) ovirt.entity: host ovirt.cluster.name.raw: The cluster's name ovirt.engine_fqdn.raw: The Manager's FQDN hostname: The host's FQDN ipaddr4: IP address interval: 10 2.5.12. StatsD Metrics The following table describes the StatsD metrics reported by the StatsD plugin. Metric Name collectd.type collectd.type_instance Description collectd.statsd.host_storage host_storage storage uuid The latency for writing to the storage domain. collectd.statsd.vm_balloon_cur vm_balloon_cur N/A The current amount of memory available to the guest virtual machine (in KB). collectd.statsd.vm_balloon_max vm_balloon_max N/A The maximum amount of memory available to the guest virtual machine (in KB). collectd.statsd.vm_balloon_min vm_balloon_min N/A The minimum amount of memory guaranteed to the guest virtual machine (in KB). collectd.statsd.vm_balloon_target vm_balloon_target N/A The amount of memory requested (in KB). collectd.statsd.vm_cpu_sys vm_cpu_sys N/A Ratio of non-guest virtual machine CPU time to total CPU time spent by QEMU. collectd.statsd.vm_cpu_usage vm_cpu_usage N/A Total CPU usage since VM start in (ns). collectd.statsd.vm_cpu_user vm_cpu_user N/A Ratio of guest virtual machine CPU time to total CPU time spent by QEMU. collectd.statsd.vm_disk_apparent_size vm_disk_apparent_size disk name The size of the disk (in bytes). collectd.statsd.vm_disk_flush_latency vm_disk_flush_latency disk name The virtual disk's flush latency (in seconds). collectd.statsd.vm_disk_read_bytes vm_disk_read_bytes disk name The read rate from disk (in bytes per second). collectd.statsd.vm_disk_read_latency vm_disk_read_latency disk name The virtual disk's read latency (in seconds). collectd.statsd.vm_disk_read_ops vm_disk_read_ops disk name The number of read operations since the virtual machine was started. collectd.statsd.vm_disk_read_rate vm_disk_read_rate disk name The virtual machine's read activity rate (in bytes per second). collectd.statsd.vm_disk_true_size vm_disk_true_size disk name The amount of underlying storage allocated (in bytes). collectd.statsd.vm_disk_write_latency vm_disk_write_latency disk name The virtual disk's write latency (in seconds). collectd.statsd.vm_disk_write_ops vm_disk_write_ops disk name The number of write operations since the virtual machine was started. collectd.statsd.vm_disk_write_rate vm_disk_write_rate disk name The virtual machine's write activity rate (in bytes per second). collectd.statsd.vm_nic_rx_bytes vm_nic_rx_bytes network name The total number of incoming bytes. collectd.statsd.vm_nic_rx_dropped vm_nic_rx_dropped network name The number of incoming packets that have been dropped. collectd.statsd.vm_nic_rx_errors vm_nic_rx_errors network name The number of incoming packets that contained errors. 
collectd.statsd.vm_nic_speed vm_nic_speed network name The interface speed (in Mbps). collectd.statsd.vm_nic_tx_bytes vm_nic_tx_bytes network name The total number of outgoing bytes. collectd.statsd.vm_nic_tx_dropped vm_nic_tx_dropped network name The number of outgoing packets that were dropped. collectd.statsd.vm_nic_tx_errors vm_nic_tx_errors network name The number of outgoing packets that contained errors. Additional Values collectd.plugin: StatsD collectd.plugin_instance: The virtual machine's name (except for collectd.statsd.host_storage=N/A) ovirt.entity: vm (except for collectd.statsd.host_storage=host) ovirt.cluster.name.raw: The cluster's name ovirt.engine_fqdn.raw: The Manager's FQDN hostname: The host's FQDN ipaddr4: IP address interval: 10 collectd.dstypes: Gauge 2.5.13. Swap Metrics Swap metrics enable you to view the amount of memory currently written onto the hard disk, in bytes, according to available, used, and cached swap space. The following table describes the Swap metrics reported by the Swap plugin. Table 2.8. Swap Metrics Metric Name collectd.type collectd.type_instance collectd.dstypes Description collectd.swap.swap swap used / free / cached Gauge The used, available, and cached swap space (in bytes). collectd.swap.swap_io swap_io in / out Derive The number of swap pages written and read per second. collectd.swap.percent percent used / free / cached Gauge The percentage of used, available, and cached swap space. Additional Fields collectd.plugin: Swap collectd.plugin_instance: None ovirt.entity: host or Manager ovirt.cluster.name.raw: The cluster's name ovirt.engine_fqdn.raw: The Manager's FQDN hostname: The host's FQDN ipaddr4: IP address interval: 10 2.5.14. Virtual Machine Metrics The following table describes the virtual machine metrics reported by the Virt plugin. Metric Name collectd.type collectd.type_instance collectd.dstypes collectd.virt.ps_cputime.syst ps_cputime.syst N/A Derive collectd.virt.percent percent virt_cpu_total Gauge collectd.virt.ps_cputime.user ps_cputime.user N/A Derive collectd.virt.virt_cpu_total virt_cpu_total CPU number Derive collectd.virt.virt_vcpu virt_vcpu CPU number Derive collectd.virt.disk_octets.read disk_octets.read disk name Gauge collectd.virt.disk_ops.read disk_ops.read disk name Gauge collectd.virt.disk_octets.write disk_octets.write disk name Gauge collectd.virt.disk_ops.write disk_ops.write disk name Gauge collectd.virt.if_octets.rx if_octets.rx network name Derive collectd.virt.if_dropped.rx if_dropped.rx network name Derive collectd.virt.if_errors.rx if_errors.rx network name Derive collectd.virt.if_octets.tx if_octets.tx network name Derive collectd.virt.if_dropped.tx if_dropped.tx network name Derive collectd.virt.if_errors.tx if_errors.tx network name Derive collectd.virt.if_packets.rx if_packets.rx network name Derive collectd.virt.if_packets.tx if_packets.tx network name Derive collectd.virt.memory memory rss / total /actual_balloon / available / unused / usable / last_update / major_fault / minor_fault / swap_in / swap_out Gauge collectd.virt.total_requests total_requests flush-DISK Derive collectd.virt.total_time_in_ms total_time_in_ms flush-DISK Derive collectd.virt.total_time_in_ms total_time_in_ms flush-DISK Derive Additional Values collectd.plugin: virt collectd.plugin_instance: The virtual machine's name ovirt.entity: vm ovirt.cluster.name.raw: The cluster's name ovirt.engine_fqdn.raw: The Manager's FQDN hostname: The host's FQDN ipaddr4: IP address interval: 10 2.5.15. 
Gauge and Derive Data Source Types Each metric includes a collectd.dstypes value that defines the data source's type: Gauge : A gauge value is simply stored as-is and is used for values that may increase or decrease, such as the amount of memory used. Derive : These data sources assume that the change of the value is interesting, i.e., the derivative. Such data sources are very common for events that can be counted, for example the number of disk read operations. The total number of disk read operations is not interesting, but rather the change since the value was last read. The value is therefore converted to a rate using the following formula (a short worked example in shell follows this entry): Note If value(new) is less than value(old), the resulting rate will be negative. If the minimum value is set to zero, such data points will be discarded. 2.6. Working with Metrics Store Indexes Metrics Store creates the following two indexes per day: project.ovirt-metrics-<ovirt-env-name>.uuid.yyyy.mm.dd project.ovirt-logs-<ovirt-env-name>.uuid.yyyy.mm.dd When using the Discover page, select the index named project.ovirt-logs-<ovirt-env-name>.uuid. On the Visualize page, select project.ovirt-metrics-<ovirt-env-name>.uuid for metrics data or project.ovirt-logs-<ovirt-env-name>.uuid for log data. | [
"rate = value(new)-value(old) time(new)-time(old)"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/metrics_store_user_guide/Metrics |
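The following is a minimal worked sketch, in shell, of the Derive-to-rate conversion described above. The counter values, timestamps, and the 10 second interval are illustrative assumptions rather than data from a real host:
# Minimal sketch: turn two samples of a Derive (cumulative) counter into a rate,
# following rate = (value(new) - value(old)) / (time(new) - time(old)).
old_value=1200; old_time=1000   # e.g. total disk read operations sampled at t=1000s
new_value=1500; new_time=1010   # the same counter one collectd interval (10 seconds) later
rate=$(( (new_value - old_value) / (new_time - old_time) ))
echo "rate: ${rate} operations/sec"   # prints: rate: 30 operations/sec
# If new_value were smaller than old_value (for example after a counter reset), the rate
# would be negative; with the data source minimum set to zero, such points are discarded.
Integer arithmetic is used here for brevity; for fractional rates, divide with awk or bc instead.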
Chapter 2. Deploying Quarkus applications | Chapter 2. Deploying Quarkus applications You can deploy your Quarkus application on OpenShift by using any of the following build strategies: Docker build, S2I Binary, Source S2I. For more details about each of these build strategies, see Chapter 1, OpenShift build strategies and Quarkus, of the Deploying your Quarkus applications to OpenShift Container Platform guide. Note The OpenShift Docker build strategy is the preferred build strategy that supports Quarkus applications targeted for JVM as well as Quarkus applications compiled to native executables. You can configure the build strategy using the quarkus.openshift.build-strategy property. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/getting_started_with_red_hat_build_of_apache_camel_for_quarkus/deploying-quarkus-applications |
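As a rough illustration of the property mentioned above, the build strategy can be passed on the command line when packaging the application. This is a hedged sketch assuming a Maven project and the standard Quarkus OpenShift extension; the docker value and the quarkus.kubernetes.deploy flag are assumptions to verify against your Quarkus version:
# Hedged sketch: package the application and request the OpenShift Docker build strategy.
# quarkus.openshift.build-strategy is the property named in the text above; "docker" and
# quarkus.kubernetes.deploy=true are assumed values from the Quarkus OpenShift extension.
./mvnw clean package \
    -Dquarkus.openshift.build-strategy=docker \
    -Dquarkus.kubernetes.deploy=true
Setting the same property in src/main/resources/application.properties has the same effect and keeps the choice under version control.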
Chapter 7. Troubleshooting | Chapter 7. Troubleshooting 7.1. Troubleshooting installations 7.1.1. Determining where installation issues occur When troubleshooting OpenShift Container Platform installation issues, you can monitor installation logs to determine at which stage issues occur. Then, retrieve diagnostic data relevant to that stage. OpenShift Container Platform installation proceeds through the following stages: Ignition configuration files are created. The bootstrap machine boots and starts hosting the remote resources required for the control plane machines to boot. The control plane machines fetch the remote resources from the bootstrap machine and finish booting. The control plane machines use the bootstrap machine to form an etcd cluster. The bootstrap machine starts a temporary Kubernetes control plane using the new etcd cluster. The temporary control plane schedules the production control plane to the control plane machines. The temporary control plane shuts down and passes control to the production control plane. The bootstrap machine adds OpenShift Container Platform components into the production control plane. The installation program shuts down the bootstrap machine. The control plane sets up the worker nodes. The control plane installs additional services in the form of a set of Operators. The cluster downloads and configures remaining components needed for the day-to-day operation, including the creation of worker machines in supported environments. 7.1.2. User-provisioned infrastructure installation considerations The default installation method uses installer-provisioned infrastructure. With installer-provisioned infrastructure clusters, OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. If possible, use this feature to avoid having to provision and maintain the cluster infrastructure. You can alternatively install OpenShift Container Platform 4.15 on infrastructure that you provide. If you use this installation method, follow user-provisioned infrastructure installation documentation carefully. Additionally, review the following considerations before the installation: Check the Red Hat Enterprise Linux (RHEL) Ecosystem to determine the level of Red Hat Enterprise Linux CoreOS (RHCOS) support provided for your chosen server hardware or virtualization technology. Many virtualization and cloud environments require agents to be installed on guest operating systems. Ensure that these agents are installed as a containerized workload deployed through a daemon set. Install cloud provider integration if you want to enable features such as dynamic storage, on-demand service routing, node hostname to Kubernetes hostname resolution, and cluster autoscaling. Note It is not possible to enable cloud provider integration in OpenShift Container Platform environments that mix resources from different cloud providers, or that span multiple physical or virtual platforms. The node life cycle controller will not allow nodes that are external to the existing provider to be added to a cluster, and it is not possible to specify more than one cloud provider integration. A provider-specific Machine API implementation is required if you want to use machine sets or autoscaling to automatically provision OpenShift Container Platform cluster nodes. Check whether your chosen cloud provider offers a method to inject Ignition configuration files into hosts as part of their initial deployment. 
If they do not, you will need to host Ignition configuration files by using an HTTP server. The steps taken to troubleshoot Ignition configuration file issues will differ depending on which of these two methods is deployed. Storage needs to be manually provisioned if you want to leverage optional framework components such as the embedded container registry, Elasticsearch, or Prometheus. Default storage classes are not defined in user-provisioned infrastructure installations unless explicitly configured. A load balancer is required to distribute API requests across all control plane nodes in highly available OpenShift Container Platform environments. You can use any TCP-based load balancing solution that meets OpenShift Container Platform DNS routing and port requirements. 7.1.3. Checking a load balancer configuration before OpenShift Container Platform installation Check your load balancer configuration prior to starting an OpenShift Container Platform installation. Prerequisites You have configured an external load balancer of your choosing, in preparation for an OpenShift Container Platform installation. The following example is based on a Red Hat Enterprise Linux (RHEL) host using HAProxy to provide load balancing services to a cluster. You have configured DNS in preparation for an OpenShift Container Platform installation. You have SSH access to your load balancer. Procedure Check that the haproxy systemd service is active: USD ssh <user_name>@<load_balancer> systemctl status haproxy Verify that the load balancer is listening on the required ports. The following example references ports 80 , 443 , 6443 , and 22623 . For HAProxy instances running on Red Hat Enterprise Linux (RHEL) 6, verify port status by using the netstat command: USD ssh <user_name>@<load_balancer> netstat -nltupe | grep -E ':80|:443|:6443|:22623' For HAProxy instances running on Red Hat Enterprise Linux (RHEL) 7 or 8, verify port status by using the ss command: USD ssh <user_name>@<load_balancer> ss -nltupe | grep -E ':80|:443|:6443|:22623' Note Red Hat recommends the ss command instead of netstat in Red Hat Enterprise Linux (RHEL) 7 or later. ss is provided by the iproute package. For more information on the ss command, see the Red Hat Enterprise Linux (RHEL) 7 Performance Tuning Guide . Check that the wildcard DNS record resolves to the load balancer: USD dig <wildcard_fqdn> @<dns_server> 7.1.4. Specifying OpenShift Container Platform installer log levels By default, the OpenShift Container Platform installer log level is set to info . If more detailed logging is required when diagnosing a failed OpenShift Container Platform installation, you can increase the openshift-install log level to debug when starting the installation again. Prerequisites You have access to the installation host. Procedure Set the installation log level to debug when initiating the installation: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete --log-level debug 1 1 Possible log levels include info , warn , error, and debug . 7.1.5. Troubleshooting openshift-install command issues If you experience issues running the openshift-install command, check the following: The installation has been initiated within 24 hours of Ignition configuration file creation. The Ignition files are created when the following command is run: USD ./openshift-install create ignition-configs --dir=./install_dir The install-config.yaml file is in the same directory as the installer. 
If an alternative installation path is declared by using the ./openshift-install --dir option, verify that the install-config.yaml file exists within that directory. 7.1.6. Monitoring installation progress You can monitor high-level installation, bootstrap, and control plane logs as an OpenShift Container Platform installation progresses. This provides greater visibility into how an installation progresses and helps identify the stage at which an installation failure occurs. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). You have SSH access to your hosts. You have the fully qualified domain names of the bootstrap and control plane nodes. Note The initial kubeadmin password can be found in <install_directory>/auth/kubeadmin-password on the installation host. Procedure Watch the installation log as the installation progresses: USD tail -f ~/<installation_directory>/.openshift_install.log Monitor the bootkube.service journald unit log on the bootstrap node, after it has booted. This provides visibility into the bootstrapping of the first control plane. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name: USD ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service Note The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes. After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop. Monitor kubelet.service journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node agent activity. Monitor the logs using oc : USD oc adm node-logs --role=master -u kubelet If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values: USD ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service Monitor crio.service journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node CRI-O container runtime activity. Monitor the logs using oc : USD oc adm node-logs --role=master -u crio If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values: USD ssh [email protected]_name.sub_domain.domain journalctl -b -f -u crio.service 7.1.7. Gathering bootstrap node diagnostic data When experiencing bootstrap-related issues, you can gather bootkube.service journald unit logs and container logs from the bootstrap node. Prerequisites You have SSH access to your bootstrap node. You have the fully qualified domain name of the bootstrap node. If you are hosting Ignition configuration files by using an HTTP server, you must have the HTTP server's fully qualified domain name and the port number. You must also have SSH access to the HTTP host. Procedure If you have access to the bootstrap node's console, monitor the console until the node reaches the login prompt. Verify the Ignition file configuration. If you are hosting Ignition configuration files by using an HTTP server. Verify the bootstrap node Ignition file URL. Replace <http_server_fqdn> with HTTP server's fully qualified domain name: USD curl -I http://<http_server_fqdn>:<port>/bootstrap.ign 1 1 The -I option returns the header only. 
If the Ignition file is available on the specified URL, the command returns 200 OK status. If it is not available, the command returns 404 file not found . To verify that the Ignition file was received by the bootstrap node, query the HTTP server logs on the serving host. For example, if you are using an Apache web server to serve Ignition files, enter the following command: USD grep -is 'bootstrap.ign' /var/log/httpd/access_log If the bootstrap Ignition file is received, the associated HTTP GET log message will include a 200 OK success status, indicating that the request succeeded. If the Ignition file was not received, check that the Ignition files exist and that they have the appropriate file and web server permissions on the serving host directly. If you are using a cloud provider mechanism to inject Ignition configuration files into hosts as part of their initial deployment. Review the bootstrap node's console to determine if the mechanism is injecting the bootstrap node Ignition file correctly. Verify the availability of the bootstrap node's assigned storage device. Verify that the bootstrap node has been assigned an IP address from the DHCP server. Collect bootkube.service journald unit logs from the bootstrap node. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name: USD ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service Note The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes. After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop. Collect logs from the bootstrap node containers. Collect the logs using podman on the bootstrap node. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name: USD ssh core@<bootstrap_fqdn> 'for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done' If the bootstrap process fails, verify the following. You can resolve api.<cluster_name>.<base_domain> from the installation host. The load balancer proxies port 6443 connections to bootstrap and control plane nodes. Ensure that the proxy configuration meets OpenShift Container Platform installation requirements. 7.1.8. Investigating control plane node installation issues If you experience control plane node installation issues, determine the control plane node OpenShift Container Platform software defined network (SDN), and network Operator status. Collect kubelet.service , crio.service journald unit logs, and control plane node container logs for visibility into control plane node agent, CRI-O container runtime, and pod activity. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have SSH access to your hosts. You have the fully qualified domain names of the bootstrap and control plane nodes. If you are hosting Ignition configuration files by using an HTTP server, you must have the HTTP server's fully qualified domain name and the port number. You must also have SSH access to the HTTP host. Note The initial kubeadmin password can be found in <install_directory>/auth/kubeadmin-password on the installation host. Procedure If you have access to the console for the control plane node, monitor the console until the node reaches the login prompt. During the installation, Ignition log messages are output to the console. Verify Ignition file configuration. 
If you are hosting Ignition configuration files by using an HTTP server. Verify the control plane node Ignition file URL. Replace <http_server_fqdn> with HTTP server's fully qualified domain name: USD curl -I http://<http_server_fqdn>:<port>/master.ign 1 1 The -I option returns the header only. If the Ignition file is available on the specified URL, the command returns 200 OK status. If it is not available, the command returns 404 file not found . To verify that the Ignition file was received by the control plane node query the HTTP server logs on the serving host. For example, if you are using an Apache web server to serve Ignition files: USD grep -is 'master.ign' /var/log/httpd/access_log If the master Ignition file is received, the associated HTTP GET log message will include a 200 OK success status, indicating that the request succeeded. If the Ignition file was not received, check that it exists on the serving host directly. Ensure that the appropriate file and web server permissions are in place. If you are using a cloud provider mechanism to inject Ignition configuration files into hosts as part of their initial deployment. Review the console for the control plane node to determine if the mechanism is injecting the control plane node Ignition file correctly. Check the availability of the storage device assigned to the control plane node. Verify that the control plane node has been assigned an IP address from the DHCP server. Determine control plane node status. Query control plane node status: USD oc get nodes If one of the control plane nodes does not reach a Ready status, retrieve a detailed node description: USD oc describe node <master_node> Note It is not possible to run oc commands if an installation issue prevents the OpenShift Container Platform API from running or if the kubelet is not running yet on each node: Determine OpenShift Container Platform SDN status. Review sdn-controller , sdn , and ovs daemon set status, in the openshift-sdn namespace: USD oc get daemonsets -n openshift-sdn If those resources are listed as Not found , review pods in the openshift-sdn namespace: USD oc get pods -n openshift-sdn Review logs relating to failed OpenShift Container Platform SDN pods in the openshift-sdn namespace: USD oc logs <sdn_pod> -n openshift-sdn Determine cluster network configuration status. Review whether the cluster's network configuration exists: USD oc get network.config.openshift.io cluster -o yaml If the installer failed to create the network configuration, generate the Kubernetes manifests again and review message output: USD ./openshift-install create manifests Review the pod status in the openshift-network-operator namespace to determine whether the Cluster Network Operator (CNO) is running: USD oc get pods -n openshift-network-operator Gather network Operator pod logs from the openshift-network-operator namespace: USD oc logs pod/<network_operator_pod_name> -n openshift-network-operator Monitor kubelet.service journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node agent activity. Retrieve the logs using oc : USD oc adm node-logs --role=master -u kubelet If the API is not functional, review the logs using SSH instead. 
Replace <master-node>.<cluster_name>.<base_domain> with appropriate values: USD ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service Note OpenShift Container Platform 4.15 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> . Retrieve crio.service journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node CRI-O container runtime activity. Retrieve the logs using oc : USD oc adm node-logs --role=master -u crio If the API is not functional, review the logs using SSH instead: USD ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service Collect logs from specific subdirectories under /var/log/ on control plane nodes. Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/openshift-apiserver/ on all control plane nodes: USD oc adm node-logs --role=master --path=openshift-apiserver Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/openshift-apiserver/audit.log contents from all control plane nodes: USD oc adm node-logs --role=master --path=openshift-apiserver/audit.log If the API is not functional, review the logs on each node using SSH instead. The following example tails /var/log/openshift-apiserver/audit.log : USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log Review control plane node container logs using SSH. List the containers: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps -a Retrieve a container's logs using crictl : USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id> If you experience control plane node configuration issues, verify that the MCO, MCO endpoint, and DNS record are functioning. The Machine Config Operator (MCO) manages operating system configuration during the installation procedure. Also verify system clock accuracy and certificate validity. Test whether the MCO endpoint is available. Replace <cluster_name> with appropriate values: USD curl https://api-int.<cluster_name>:22623/config/master If the endpoint is unresponsive, verify load balancer configuration. Ensure that the endpoint is configured to run on port 22623. Verify that the MCO endpoint's DNS record is configured and resolves to the load balancer. Run a DNS lookup for the defined MCO endpoint name: USD dig api-int.<cluster_name> @<dns_server> Run a reverse lookup to the assigned MCO IP address on the load balancer: USD dig -x <load_balancer_mco_ip_address> @<dns_server> Verify that the MCO is functioning from the bootstrap node directly. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name: USD ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/master System clock time must be synchronized between bootstrap, master, and worker nodes. 
Check each node's system clock reference time and time synchronization statistics: USD ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking Review certificate validity: USD openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text 7.1.9. Investigating etcd installation issues If you experience etcd issues during installation, you can check etcd pod status and collect etcd pod logs. You can also verify etcd DNS records and check DNS availability on control plane nodes. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have SSH access to your hosts. You have the fully qualified domain names of the control plane nodes. Procedure Check the status of etcd pods. Review the status of pods in the openshift-etcd namespace: USD oc get pods -n openshift-etcd Review the status of pods in the openshift-etcd-operator namespace: USD oc get pods -n openshift-etcd-operator If any of the pods listed by the commands are not showing a Running or a Completed status, gather diagnostic information for the pod. Review events for the pod: USD oc describe pod/<pod_name> -n <namespace> Inspect the pod's logs: USD oc logs pod/<pod_name> -n <namespace> If the pod has more than one container, the preceding command will create an error, and the container names will be provided in the error message. Inspect logs for each container: USD oc logs pod/<pod_name> -c <container_name> -n <namespace> If the API is not functional, review etcd pod and container logs on each control plane node by using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values. List etcd pods on each control plane node: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods --name=etcd- For any pods not showing Ready status, inspect pod status in detail. Replace <pod_id> with the pod's ID listed in the output of the preceding command: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <pod_id> List containers related to a pod: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps | grep '<pod_id>' For any containers not showing Ready status, inspect container status in detail. Replace <container_id> with container IDs listed in the output of the preceding command: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id> Review the logs for any containers not showing a Ready status. Replace <container_id> with the container IDs listed in the output of the preceding command: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id> Note OpenShift Container Platform 4.15 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> . Validate primary and secondary DNS server connectivity from control plane nodes. 7.1.10. 
Investigating control plane node kubelet and API server issues To investigate control plane node kubelet and API server issues during installation, check DNS, DHCP, and load balancer functionality. Also, verify that certificates have not expired. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have SSH access to your hosts. You have the fully qualified domain names of the control plane nodes. Procedure Verify that the API server's DNS record directs the kubelet on control plane nodes to https://api-int.<cluster_name>.<base_domain>:6443 . Ensure that the record references the load balancer. Ensure that the load balancer's port 6443 definition references each control plane node. Check that unique control plane node hostnames have been provided by DHCP. Inspect the kubelet.service journald unit logs on each control plane node. Retrieve the logs using oc : USD oc adm node-logs --role=master -u kubelet If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values: USD ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service Note OpenShift Container Platform 4.15 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> . Check for certificate expiration messages in the control plane node kubelet logs. Retrieve the log using oc : USD oc adm node-logs --role=master -u kubelet | grep -is 'x509: certificate has expired' If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values: USD ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service | grep -is 'x509: certificate has expired' 7.1.11. Investigating worker node installation issues If you experience worker node installation issues, you can review the worker node status. Collect kubelet.service , crio.service journald unit logs and the worker node container logs for visibility into the worker node agent, CRI-O container runtime and pod activity. Additionally, you can check the Ignition file and Machine API Operator functionality. If worker node postinstallation configuration fails, check Machine Config Operator (MCO) and DNS functionality. You can also verify system clock synchronization between the bootstrap, master, and worker nodes, and validate certificates. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have SSH access to your hosts. You have the fully qualified domain names of the bootstrap and worker nodes. If you are hosting Ignition configuration files by using an HTTP server, you must have the HTTP server's fully qualified domain name and the port number. You must also have SSH access to the HTTP host. 
Note The initial kubeadmin password can be found in <install_directory>/auth/kubeadmin-password on the installation host. Procedure If you have access to the worker node's console, monitor the console until the node reaches the login prompt. During the installation, Ignition log messages are output to the console. Verify Ignition file configuration. If you are hosting Ignition configuration files by using an HTTP server. Verify the worker node Ignition file URL. Replace <http_server_fqdn> with HTTP server's fully qualified domain name: USD curl -I http://<http_server_fqdn>:<port>/worker.ign 1 1 The -I option returns the header only. If the Ignition file is available on the specified URL, the command returns 200 OK status. If it is not available, the command returns 404 file not found . To verify that the Ignition file was received by the worker node, query the HTTP server logs on the HTTP host. For example, if you are using an Apache web server to serve Ignition files: USD grep -is 'worker.ign' /var/log/httpd/access_log If the worker Ignition file is received, the associated HTTP GET log message will include a 200 OK success status, indicating that the request succeeded. If the Ignition file was not received, check that it exists on the serving host directly. Ensure that the appropriate file and web server permissions are in place. If you are using a cloud provider mechanism to inject Ignition configuration files into hosts as part of their initial deployment. Review the worker node's console to determine if the mechanism is injecting the worker node Ignition file correctly. Check the availability of the worker node's assigned storage device. Verify that the worker node has been assigned an IP address from the DHCP server. Determine worker node status. Query node status: USD oc get nodes Retrieve a detailed node description for any worker nodes not showing a Ready status: USD oc describe node <worker_node> Note It is not possible to run oc commands if an installation issue prevents the OpenShift Container Platform API from running or if the kubelet is not running yet on each node. Unlike control plane nodes, worker nodes are deployed and scaled using the Machine API Operator. Check the status of the Machine API Operator. Review Machine API Operator pod status: USD oc get pods -n openshift-machine-api If the Machine API Operator pod does not have a Ready status, detail the pod's events: USD oc describe pod/<machine_api_operator_pod_name> -n openshift-machine-api Inspect machine-api-operator container logs. The container runs within the machine-api-operator pod: USD oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c machine-api-operator Also inspect kube-rbac-proxy container logs. The container also runs within the machine-api-operator pod: USD oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c kube-rbac-proxy Monitor kubelet.service journald unit logs on worker nodes, after they have booted. This provides visibility into worker node agent activity. Retrieve the logs using oc : USD oc adm node-logs --role=worker -u kubelet If the API is not functional, review the logs using SSH instead. Replace <worker-node>.<cluster_name>.<base_domain> with appropriate values: USD ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service Note OpenShift Container Platform 4.15 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. 
Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> . Retrieve crio.service journald unit logs on worker nodes, after they have booted. This provides visibility into worker node CRI-O container runtime activity. Retrieve the logs using oc : USD oc adm node-logs --role=worker -u crio If the API is not functional, review the logs using SSH instead: USD ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service Collect logs from specific subdirectories under /var/log/ on worker nodes. Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/sssd/ on all worker nodes: USD oc adm node-logs --role=worker --path=sssd Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/sssd/sssd.log contents from all worker nodes: USD oc adm node-logs --role=worker --path=sssd/sssd.log If the API is not functional, review the logs on each node using SSH instead. The following example tails /var/log/sssd/sssd.log : USD ssh core@<worker-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/sssd/sssd.log Review worker node container logs using SSH. List the containers: USD ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl ps -a Retrieve a container's logs using crictl : USD ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id> If you experience worker node configuration issues, verify that the MCO, MCO endpoint, and DNS record are functioning. The Machine Config Operator (MCO) manages operating system configuration during the installation procedure. Also verify system clock accuracy and certificate validity. Test whether the MCO endpoint is available. Replace <cluster_name> with appropriate values: USD curl https://api-int.<cluster_name>:22623/config/worker If the endpoint is unresponsive, verify load balancer configuration. Ensure that the endpoint is configured to run on port 22623. Verify that the MCO endpoint's DNS record is configured and resolves to the load balancer. Run a DNS lookup for the defined MCO endpoint name: USD dig api-int.<cluster_name> @<dns_server> Run a reverse lookup to the assigned MCO IP address on the load balancer: USD dig -x <load_balancer_mco_ip_address> @<dns_server> Verify that the MCO is functioning from the bootstrap node directly. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name: USD ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/worker System clock time must be synchronized between bootstrap, master, and worker nodes. Check each node's system clock reference time and time synchronization statistics: USD ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking Review certificate validity: USD openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text 7.1.12. Querying Operator status after installation You can check Operator status at the end of an installation. Retrieve diagnostic data for Operators that do not become available. 
Review logs for any Operator pods that are listed as Pending or have an error status. Validate base images used by problematic pods. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Check that cluster Operators are all available at the end of an installation. USD oc get clusteroperators Verify that all of the required certificate signing requests (CSRs) are approved. Some nodes might not move to a Ready status and some cluster Operators might not become available if there are pending CSRs. Check the status of the CSRs and ensure that you see a client and server request with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 1 csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending 2 csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... 1 A client request CSR. 2 A server request CSR. In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After you approve the initial CSRs, the subsequent node client CSRs are automatically approved by the cluster kube-controller-manager . Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve View Operator events: USD oc describe clusteroperator <operator_name> Review Operator pod status within the Operator's namespace: USD oc get pods -n <operator_namespace> Obtain a detailed description for pods that do not have Running status: USD oc describe pod/<operator_pod_name> -n <operator_namespace> Inspect pod logs: USD oc logs pod/<operator_pod_name> -n <operator_namespace> When experiencing pod base image related issues, review base image status. 
Obtain details of the base image used by a problematic pod: USD oc get pod -o "jsonpath={range .status.containerStatuses[*]}{.name}{'\t'}{.state}{'\t'}{.image}{'\n'}{end}" <operator_pod_name> -n <operator_namespace> List base image release information: USD oc adm release info <image_path>:<tag> --commits 7.1.13. Gathering logs from a failed installation If you gave an SSH key to your installation program, you can gather data about your failed installation. Note You use a different command to gather logs about an unsuccessful installation than to gather logs from a running cluster. If you must gather logs from a running cluster, use the oc adm must-gather command. Prerequisites Your OpenShift Container Platform installation failed before the bootstrap process finished. The bootstrap node is running and accessible through SSH. The ssh-agent process is active on your computer, and you provided the same SSH key to both the ssh-agent process and the installation program. If you tried to install a cluster on infrastructure that you provisioned, you must have the fully qualified domain names of the bootstrap and control plane nodes. Procedure Generate the commands that are required to obtain the installation logs from the bootstrap and control plane machines: If you used installer-provisioned infrastructure, change to the directory that contains the installation program and run the following command: USD ./openshift-install gather bootstrap --dir <installation_directory> 1 1 installation_directory is the directory you specified when you ran ./openshift-install create cluster . This directory contains the OpenShift Container Platform definition files that the installation program creates. For installer-provisioned infrastructure, the installation program stores information about the cluster, so you do not specify the hostnames or IP addresses. If you used infrastructure that you provisioned yourself, change to the directory that contains the installation program and run the following command: USD ./openshift-install gather bootstrap --dir <installation_directory> \ 1 --bootstrap <bootstrap_address> \ 2 --master <master_1_address> \ 3 --master <master_2_address> \ 4 --master <master_3_address> 5 1 For installation_directory , specify the same directory you specified when you ran ./openshift-install create cluster . This directory contains the OpenShift Container Platform definition files that the installation program creates. 2 <bootstrap_address> is the fully qualified domain name or IP address of the cluster's bootstrap machine. 3 4 5 For each control plane, or master, machine in your cluster, replace <master_*_address> with its fully qualified domain name or IP address. Note A default cluster contains three control plane machines. List all of your control plane machines as shown, no matter how many your cluster uses. Example output INFO Pulling debug logs from the bootstrap machine INFO Bootstrap gather logs captured here "<installation_directory>/log-bundle-<timestamp>.tar.gz" If you open a Red Hat support case about your installation failure, include the compressed logs in the case. 7.1.14. Additional resources See Installation process for more details on OpenShift Container Platform installation types and process. 7.2. Verifying node health 7.2.1. Reviewing node status, resource usage, and configuration Review cluster node health status, resource consumption statistics, and node logs. Additionally, query kubelet status on individual nodes. 
Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure List the name, status, and role for all nodes in the cluster: USD oc get nodes Summarize CPU and memory usage for each node within the cluster: USD oc adm top nodes Summarize CPU and memory usage for a specific node: USD oc adm top node my-node 7.2.2. Querying the kubelet's status on a node You can review cluster node health status, resource consumption statistics, and node logs. Additionally, you can query kubelet status on individual nodes. Prerequisites You have access to the cluster as a user with the cluster-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). Procedure The kubelet is managed using a systemd service on each node. Review the kubelet's status by querying the kubelet systemd service within a debug pod. Start a debug pod for a node: USD oc debug node/my-node Note If you are running oc debug on a control plane node, you can find administrative kubeconfig files in the /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs directory. Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Note OpenShift Container Platform 4.15 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead. Check whether the kubelet systemd service is active on the node: # systemctl is-active kubelet Output a more detailed kubelet.service status summary: # systemctl status kubelet 7.2.3. Querying cluster node journal logs You can gather journald unit logs and other logs within /var/log on individual cluster nodes. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Your API service is still functional. You have SSH access to your hosts. Procedure Query kubelet journald unit logs from OpenShift Container Platform cluster nodes. The following example queries control plane nodes only: USD oc adm node-logs --role=master -u kubelet 1 1 Replace kubelet as appropriate to query other unit logs. Collect logs from specific subdirectories under /var/log/ on cluster nodes. Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/openshift-apiserver/ on all control plane nodes: USD oc adm node-logs --role=master --path=openshift-apiserver Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/openshift-apiserver/audit.log contents from all control plane nodes: USD oc adm node-logs --role=master --path=openshift-apiserver/audit.log If the API is not functional, review the logs on each node using SSH instead. 
The following example tails /var/log/openshift-apiserver/audit.log : USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log Note OpenShift Container Platform 4.15 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> . 7.3. Troubleshooting CRI-O container runtime issues 7.3.1. About CRI-O container runtime engine CRI-O is a Kubernetes-native container engine implementation that integrates closely with the operating system to deliver an efficient and optimized Kubernetes experience. The CRI-O container engine runs as a systemd service on each OpenShift Container Platform cluster node. When container runtime issues occur, verify the status of the crio systemd service on each node. Gather CRI-O journald unit logs from nodes that have container runtime issues. 7.3.2. Verifying CRI-O runtime engine status You can verify CRI-O container runtime engine status on each cluster node. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Review CRI-O status by querying the crio systemd service on a node, within a debug pod. Start a debug pod for a node: USD oc debug node/my-node Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Note OpenShift Container Platform 4.15 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead. Check whether the crio systemd service is active on the node: # systemctl is-active crio Output a more detailed crio.service status summary: # systemctl status crio.service 7.3.3. Gathering CRI-O journald unit logs If you experience CRI-O issues, you can obtain CRI-O journald unit logs from a node. Prerequisites You have access to the cluster as a user with the cluster-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). You have the fully qualified domain names of the control plane or control plane machines. Procedure Gather CRI-O journald unit logs. The following example collects logs from all control plane nodes (within the cluster: USD oc adm node-logs --role=master -u crio Gather CRI-O journald unit logs from a specific node: USD oc adm node-logs <node_name> -u crio If the API is not functional, review the logs using SSH instead. 
Replace <node>.<cluster_name>.<base_domain> with appropriate values: USD ssh core@<node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service Note OpenShift Container Platform 4.15 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> . 7.3.4. Cleaning CRI-O storage You can manually clear the CRI-O ephemeral storage if you experience the following issues: A node cannot run any pods and this error appears: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to mount container XXX: error recreating the missing symlinks: error reading name of symlink for XXX: open /var/lib/containers/storage/overlay/XXX/link: no such file or directory You cannot create a new container on a working node and the "can't stat lower layer" error appears: can't stat lower layer ... because it does not exist. Going through storage to recreate the missing symlinks. Your node is in the NotReady state after a cluster upgrade or if you attempt to reboot it. The container runtime implementation ( crio ) is not working properly. You are unable to start a debug shell on the node using oc debug node/<node_name> because the container runtime instance ( crio ) is not working. Follow this process to completely wipe the CRI-O storage and resolve the errors. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Cordon the node. This prevents any workload from being scheduled on the node if it returns to the Ready status. You will know that scheduling is disabled when SchedulingDisabled is in your Status section: USD oc adm cordon <node_name> Drain the node as the cluster-admin user: USD oc adm drain <node_name> --ignore-daemonsets --delete-emptydir-data Note The terminationGracePeriodSeconds attribute of a pod or pod template controls the graceful termination period. This attribute defaults to 30 seconds, but can be customized for each application as necessary. If set to more than 90 seconds, the pod might be killed with SIGKILL and fail to terminate successfully. When the node is drained, connect back to the node by using SSH or the console, and then switch to the root user: USD ssh core@<node>.<cluster_name>.<base_domain> USD sudo -i Manually stop the kubelet: # systemctl stop kubelet Stop the containers and pods: Use the following command to stop the pods that are not in the HostNetwork . They must be removed first because their removal relies on the networking plugin pods, which are in the HostNetwork .
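# The loop below inspects each pod sandbox and force-removes it unless its network namespace option is NODE, so that HostNetwork pods, such as the networking plugin pods, are left running: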
for pod in USD(crictl pods -q); do if [[ "USD(crictl inspectp USDpod | jq -r .status.linux.namespaces.options.network)" != "NODE" ]]; then crictl rmp -f USDpod; fi; done Stop all other pods: # crictl rmp -fa Manually stop the crio services: # systemctl stop crio After you run those commands, you can completely wipe the ephemeral storage: # crio wipe -f Start the crio and kubelet service: # systemctl start crio # systemctl start kubelet You will know if the clean up worked if the crio and kubelet services are started, and the node is in the Ready status: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready, SchedulingDisabled master 133m v1.28.5 Mark the node schedulable. You will know that the scheduling is enabled when SchedulingDisabled is no longer in status: USD oc adm uncordon <node_name> Example output NAME STATUS ROLES AGE VERSION ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready master 133m v1.28.5 7.4. Troubleshooting operating system issues OpenShift Container Platform runs on RHCOS. You can follow these procedures to troubleshoot problems related to the operating system. 7.4.1. Investigating kernel crashes The kdump service, included in the kexec-tools package, provides a crash-dumping mechanism. You can use this service to save the contents of a system's memory for later analysis. The x86_64 architecture supports kdump in General Availability (GA) status, whereas other architectures support kdump in Technology Preview (TP) status. The following table provides details about the support level of kdump for different architectures. Table 7.1. Kdump support in RHCOS Architecture Support level x86_64 GA aarch64 TP s390x TP ppc64le TP Important Kdump support, for the preceding three architectures in the table, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 7.4.1.1. Enabling kdump RHCOS ships with the kexec-tools package, but manual configuration is required to enable the kdump service. Procedure Perform the following steps to enable kdump on RHCOS. To reserve memory for the crash kernel during the first kernel booting, provide kernel arguments by entering the following command: # rpm-ostree kargs --append='crashkernel=256M' Note For the ppc64le platform, the recommended value for crashkernel is crashkernel=2G-4G:384M,4G-16G:512M,16G-64G:1G,64G-128G:2G,128G-:4G . Optional: To write the crash dump over the network or to some other location, rather than to the default local /var/crash location, edit the /etc/kdump.conf configuration file. Note If your node uses LUKS-encrypted devices, you must use network dumps as kdump does not support saving crash dumps to LUKS-encrypted devices. For details on configuring the kdump service, see the comments in /etc/sysconfig/kdump , /etc/kdump.conf , and the kdump.conf manual page. Also refer to the RHEL kdump documentation for further information on configuring the dump target. 
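For reference, the following is a minimal sketch of an /etc/kdump.conf file that sends dumps to an SSH target instead of the local /var/crash directory. The server name, key path, and target directory are illustrative placeholders rather than values required by this procedure; the -F option produces the flattened format that makedumpfile requires when dumping over SSH.
Example /etc/kdump.conf file for an SSH dump target
ssh core@kdump-target.example.com
sshkey /root/.ssh/kdump_id_rsa
path /var/crash
core_collector makedumpfile -F -l --message-level 7 -d 31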
Important If you have multipathing enabled on your primary disk, the dump target must be either an NFS or SSH server and you must exclude the multipath module from your /etc/kdump.conf configuration file. Enable the kdump systemd service. # systemctl enable kdump.service Reboot your system. # systemctl reboot Ensure that kdump has loaded a crash kernel by checking that the kdump.service systemd service has started and exited successfully and that the command, cat /sys/kernel/kexec_crash_loaded , prints the value 1 . 7.4.1.2. Enabling kdump on day-1 The kdump service is intended to be enabled per node to debug kernel problems. Because there are costs to having kdump enabled, and these costs accumulate with each additional kdump-enabled node, it is recommended that the kdump service only be enabled on each node as needed. Potential costs of enabling the kdump service on each node include: Less available RAM due to memory being reserved for the crash kernel. Node unavailability while the kernel is dumping the core. Additional storage space being used to store the crash dumps. If you are aware of the downsides and trade-offs of having the kdump service enabled, it is possible to enable kdump in a cluster-wide fashion. Although machine-specific machine configs are not yet supported, you can use a systemd unit in a MachineConfig object as a day-1 customization and have kdump enabled on all nodes in the cluster. You can create a MachineConfig object and inject that object into the set of manifest files used by Ignition during cluster setup. Note See "Customizing nodes" in the Installing Installation configuration section for more information and examples on how to use Ignition configs. Procedure Create a MachineConfig object for cluster-wide configuration: Create a Butane config file, 99-worker-kdump.bu , that configures and enables kdump: variant: openshift version: 4.15.0 metadata: name: 99-worker-kdump 1 labels: machineconfiguration.openshift.io/role: worker 2 openshift: kernel_arguments: 3 - crashkernel=256M storage: files: - path: /etc/kdump.conf 4 mode: 0644 overwrite: true contents: inline: | path /var/crash core_collector makedumpfile -l --message-level 7 -d 31 - path: /etc/sysconfig/kdump 5 mode: 0644 overwrite: true contents: inline: | KDUMP_COMMANDLINE_REMOVE="hugepages hugepagesz slub_debug quiet log_buf_len swiotlb" KDUMP_COMMANDLINE_APPEND="irqpoll nr_cpus=1 reset_devices cgroup_disable=memory mce=off numa=off udev.children-max=2 panic=10 rootflags=nofail acpi_no_memhotplug transparent_hugepage=never nokaslr novmcoredd hest_disable" 6 KEXEC_ARGS="-s" KDUMP_IMG="vmlinuz" systemd: units: - name: kdump.service enabled: true 1 2 Replace worker with master in both locations when creating a MachineConfig object for control plane nodes. 3 Provide kernel arguments to reserve memory for the crash kernel. You can add other kernel arguments if necessary. For the ppc64le platform, the recommended value for crashkernel is crashkernel=2G-4G:384M,4G-16G:512M,16G-64G:1G,64G-128G:2G,128G-:4G . 4 If you want to change the contents of /etc/kdump.conf from the default, include this section and modify the inline subsection accordingly. 5 If you want to change the contents of /etc/sysconfig/kdump from the default, include this section and modify the inline subsection accordingly. 6 For the ppc64le platform, replace nr_cpus=1 with maxcpus=1 , which is not supported on this platform. 
Note To export the dumps to NFS targets, the nfs kernel module must be explicitly added to the configuration file: Example /etc/kdump.conf file nfs server.example.com:/export/cores core_collector makedumpfile -l --message-level 7 -d 31 extra_modules nfs Use Butane to generate a machine config YAML file, 99-worker-kdump.yaml , containing the configuration to be delivered to the nodes: USD butane 99-worker-kdump.bu -o 99-worker-kdump.yaml Put the YAML file into the <installation_directory>/manifests/ directory during cluster setup. You can also create this MachineConfig object after cluster setup with the YAML file: USD oc create -f 99-worker-kdump.yaml 7.4.1.3. Testing the kdump configuration See the Testing the kdump configuration section in the RHEL documentation for kdump. 7.4.1.4. Analyzing a core dump See the Analyzing a core dump section in the RHEL documentation for kdump. Note It is recommended to perform vmcore analysis on a separate RHEL system. Additional resources Setting up kdump in RHEL Linux kernel documentation for kdump kdump.conf(5) - a manual page for the /etc/kdump.conf configuration file containing the full documentation of available options kexec(8) - a manual page for the kexec package Red Hat Knowledgebase article regarding kexec and kdump 7.4.2. Debugging Ignition failures If a machine cannot be provisioned, Ignition fails and RHCOS will boot into the emergency shell. Use the following procedure to get debugging information. Procedure Run the following command to show which service units failed: USD systemctl --failed Optional: Run the following command on an individual service unit to find out more information: USD journalctl -u <unit>.service 7.5. Troubleshooting network issues 7.5.1. How the network interface is selected For installations on bare metal or with virtual machines that have more than one network interface controller (NIC), the NIC that OpenShift Container Platform uses for communication with the Kubernetes API server is determined by the nodeip-configuration.service service unit that is run by systemd when the node boots. The nodeip-configuration.service selects the IP from the interface associated with the default route. After the nodeip-configuration.service service determines the correct NIC, the service creates the /etc/systemd/system/kubelet.service.d/20-nodenet.conf file. The 20-nodenet.conf file sets the KUBELET_NODE_IP environment variable to the IP address that the service selected. When the kubelet service starts, it reads the value of the environment variable from the 20-nodenet.conf file and sets the IP address as the value of the --node-ip kubelet command-line argument. As a result, the kubelet service uses the selected IP address as the node IP address. If hardware or networking is reconfigured after installation, or if there is a networking layout where the node IP should not come from the default route interface, it is possible for the nodeip-configuration.service service to select a different NIC after a reboot. In some cases, you might be able to detect that a different NIC is selected by reviewing the INTERNAL-IP column in the output from the oc get nodes -o wide command. If network communication is disrupted or misconfigured because a different NIC is selected, you might receive the following error: EtcdCertSignerControllerDegraded . You can create a hint file that includes the NODEIP_HINT variable to override the default IP selection logic. For more information, see Optional: Overriding the default node IP selection logic. 
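As a quick check of which address was selected, you can compare the INTERNAL-IP column with the KUBELET_NODE_IP value recorded in the 20-nodenet.conf drop-in described above. The following commands are an illustrative sketch that reuses that file path; the exact contents of the drop-in can vary between releases:
USD oc get nodes -o wide
USD oc debug node/<node_name> -- chroot /host sh -c "cat /etc/systemd/system/kubelet.service.d/20-nodenet.conf"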
7.5.1.1. Optional: Overriding the default node IP selection logic To override the default IP selection logic, you can create a hint file that includes the NODEIP_HINT variable to override the default IP selection logic. Creating a hint file allows you to select a specific node IP address from the interface in the subnet of the IP address specified in the NODEIP_HINT variable. For example, if a node has two interfaces, eth0 with an address of 10.0.0.10/24 , and eth1 with an address of 192.0.2.5/24 , and the default route points to eth0 ( 10.0.0.10 ),the node IP address would normally use the 10.0.0.10 IP address. Users can configure the NODEIP_HINT variable to point at a known IP in the subnet, for example, a subnet gateway such as 192.0.2.1 so that the other subnet, 192.0.2.0/24 , is selected. As a result, the 192.0.2.5 IP address on eth1 is used for the node. The following procedure shows how to override the default node IP selection logic. Procedure Add a hint file to your /etc/default/nodeip-configuration file, for example: NODEIP_HINT=192.0.2.1 Important Do not use the exact IP address of a node as a hint, for example, 192.0.2.5 . Using the exact IP address of a node causes the node using the hint IP address to fail to configure correctly. The IP address in the hint file is only used to determine the correct subnet. It will not receive traffic as a result of appearing in the hint file. Generate the base-64 encoded content by running the following command: USD echo -n 'NODEIP_HINT=192.0.2.1' | base64 -w0 Example output Tk9ERUlQX0hJTlQ9MTkyLjAuMCxxxx== Activate the hint by creating a machine config manifest for both master and worker roles before deploying the cluster: 99-nodeip-hint-master.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-nodeip-hint-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<encoded_content> 1 mode: 0644 overwrite: true path: /etc/default/nodeip-configuration 1 Replace <encoded_contents> with the base64-encoded content of the /etc/default/nodeip-configuration file, for example, Tk9ERUlQX0hJTlQ9MTkyLjAuMCxxxx== . Note that a space is not acceptable after the comma and before the encoded content. 99-nodeip-hint-worker.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-nodeip-hint-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<encoded_content> 1 mode: 0644 overwrite: true path: /etc/default/nodeip-configuration 1 Replace <encoded_contents> with the base64-encoded content of the /etc/default/nodeip-configuration file, for example, Tk9ERUlQX0hJTlQ9MTkyLjAuMCxxxx== . Note that a space is not acceptable after the comma and before the encoded content. Save the manifest to the directory where you store your cluster configuration, for example, ~/clusterconfigs . Deploy the cluster. 7.5.1.2. Configuring OVN-Kubernetes to use a secondary OVS bridge You can create an additional or secondary Open vSwitch (OVS) bridge, br-ex1 , that OVN-Kubernetes manages and the Multiple External Gateways (MEG) implementation uses for defining external gateways for an OpenShift Container Platform node. You can define a MEG in an AdminPolicyBasedExternalRoute custom resource (CR). 
The MEG implementation provides a pod with access to multiple gateways, equal-cost multipath (ECMP) routes, and the Bidirectional Forwarding Detection (BFD) implementation. Consider a use case for pods impacted by the Multiple External Gateways (MEG) feature and you want to egress traffic to a different interface, for example br-ex1 , on a node. Egress traffic for pods not impacted by MEG get routed to the default OVS br-ex bridge. Important Currently, MEG is unsupported for use with other egress features, such as egress IP, egress firewalls, or egress routers. Attempting to use MEG with egress features like egress IP can result in routing and traffic flow conflicts. This occurs because of how OVN-Kubernetes handles routing and source network address translation (SNAT). This results in inconsistent routing and might break connections in some environments where the return path must patch the incoming path. You must define the additional bridge in an interface definition of a machine configuration manifest file. The Machine Config Operator uses the manifest to create a new file at /etc/ovnk/extra_bridge on the host. The new file includes the name of the network interface that the additional OVS bridge configures for a node. After you create and edit the manifest file, the Machine Config Operator completes tasks in the following order: Drains nodes in singular order based on the selected machine configuration pool. Injects Ignition configuration files into each node, so that each node receives the additional br-ex1 bridge network configuration. Verify that the br-ex MAC address matches the MAC address for the interface that br-ex uses for the network connection. Executes the configure-ovs.sh shell script that references the new interface definition. Adds br-ex and br-ex1 to the host node. Uncordons the nodes. Note After all the nodes return to the Ready state and the OVN-Kubernetes Operator detects and configures br-ex and br-ex1 , the Operator applies the k8s.ovn.org/l3-gateway-config annotation to each node. For more information about useful situations for the additional br-ex1 bridge and a situation that always requires the default br-ex bridge, see "Configuration for a localnet topology". Procedure Optional: Create an interface connection that your additional bridge, br-ex1 , can use by completing the following steps. The example steps show the creation of a new bond and its dependent interfaces that are all defined in a machine configuration manifest file. The additional bridge uses the MachineConfig object to form a additional bond interface. Important Do not use the Kubernetes NMState Operator to define or a NodeNetworkConfigurationPolicy (NNCP) manifest file to define the additional interface. Also ensure that the additional interface or sub-interfaces when defining a bond interface are not used by an existing br-ex OVN Kubernetes network deployment. Create the following interface definition files. These files get added to a machine configuration manifest file so that host nodes can access the definition files. 
Example of the first interface definition file that is named eno1.config [connection] id=eno1 type=ethernet interface-name=eno1 master=bond1 slave-type=bond autoconnect=true autoconnect-priority=20 Example of the second interface definition file that is named eno2.config [connection] id=eno2 type=ethernet interface-name=eno2 master=bond1 slave-type=bond autoconnect=true autoconnect-priority=20 Example of the second bond interface definition file that is named bond1.config [connection] id=bond1 type=bond interface-name=bond1 autoconnect=true connection.autoconnect-slaves=1 autoconnect-priority=20 [bond] mode=802.3ad miimon=100 xmit_hash_policy="layer3+4" [ipv4] method=auto Convert the definition files to Base64 encoded strings by running the following command: USD base64 <directory_path>/en01.config USD base64 <directory_path>/eno2.config USD base64 <directory_path>/bond1.config Prepare the environment variables. Replace <machine_role> with the node role, such as worker , and replace <interface_name> with the name of your additional br-ex bridge name. USD export ROLE=<machine_role> Define each interface definition in a machine configuration manifest file: Example of a machine configuration file with definitions added for bond1 , eno1 , and en02 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: USD{worker} name: 12-USD{ROLE}-sec-bridge-cni spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:;base64,<base-64-encoded-contents-for-bond1.conf> path: /etc/NetworkManager/system-connections/bond1.nmconnection filesystem: root mode: 0600 - contents: source: data:;base64,<base-64-encoded-contents-for-eno1.conf> path: /etc/NetworkManager/system-connections/eno1.nmconnection filesystem: root mode: 0600 - contents: source: data:;base64,<base-64-encoded-contents-for-eno2.conf> path: /etc/NetworkManager/system-connections/eno2.nmconnection filesystem: root mode: 0600 # ... Create a machine configuration manifest file for configuring the network plugin by entering the following command in your terminal: USD oc create -f <machine_config_file_name> Create an Open vSwitch (OVS) bridge, br-ex1 , on nodes by using the OVN-Kubernetes network plugin to create an extra_bridge file`. Ensure that you save the file in the /etc/ovnk/extra_bridge path of the host. The file must state the interface name that supports the additional bridge and not the default interface that supports br-ex , which holds the primary IP address of the node. Example configuration for the extra_bridge file, /etc/ovnk/extra_bridge , that references a additional interface bond1 Create a machine configuration manifest file that defines the existing static interface that hosts br-ex1 on any nodes restarted on your cluster: Example of a machine configuration file that defines bond1 as the interface for hosting br-ex1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: USD{worker} name: 12-worker-extra-bridge spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/ovnk/extra_bridge mode: 0420 overwrite: true contents: source: data:text/plain;charset=utf-8,bond1 filesystem: root Apply the machine-configuration to your selected nodes: USD oc create -f <machine_config_file_name> Optional: You can override the br-ex selection logic for nodes by creating a machine configuration file that in turn creates a /var/lib/ovnk/iface_default_hint resource. 
Note The resource lists the name of the interface that br-ex selects for your cluster. By default, br-ex selects the primary interface for a node based on boot order and the IP address subnet in the machine network. Certain machine network configurations might require that br-ex continues to select the default interfaces or bonds for a host node. Create a machine configuration file on the host node to override the default interface. Important Only create this machine configuration file for the purposes of changing the br-ex selection logic. Using this file to change the IP addresses of existing nodes in your cluster is not supported. Example of a machine configuration file that overrides the default interface apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: USD{worker} name: 12-worker-br-ex-override spec: config: ignition: version: 3.2.0 storage: files: - path: /var/lib/ovnk/iface_default_hint mode: 0420 overwrite: true contents: source: data:text/plain;charset=utf-8,bond0 1 filesystem: root 1 Ensure bond0 exists on the node before you apply the machine configuration file to the node. Before you apply the configuration to all new nodes in your cluster, reboot the host node to verify that br-ex selects the intended interface and does not conflict with the new interfaces that you defined on br-ex1 . Apply the machine configuration file to all new nodes in your cluster: USD oc create -f <machine_config_file_name> Verification Identify the IP addresses of nodes with the exgw-ip-addresses label in your cluster to verify that the nodes use the additional bridge instead of the default bridge: USD oc get nodes -o json | grep --color exgw-ip-addresses Example output "k8s.ovn.org/l3-gateway-config": \"exgw-ip-address\":\"172.xx.xx.yy/24\",\"-hops\":[\"xx.xx.xx.xx\"], Observe that the additional bridge exists on target nodes by reviewing the network interface names on the host node: USD oc debug node/<node_name> -- chroot /host sh -c "ip a | grep mtu | grep br-ex" Example output Starting pod/worker-1-debug ... To use host binaries, run `chroot /host` # ... 5: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 6: br-ex1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 Optional: If you use /var/lib/ovnk/iface_default_hint , check that the MAC address of br-ex matches the MAC address of the primary selected interface: USD oc debug node/<node_name> -- chroot /host sh -c "ip a | grep -A1 -E 'br-ex|bond0' Example output that shows the primary interface for br-ex as bond0 Starting pod/worker-1-debug ... To use host binaries, run `chroot /host` # ... sh-5.1# ip a | grep -A1 -E 'br-ex|bond0' 2: bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master ovs-system state UP group default qlen 1000 link/ether fa:16:3e:47:99:98 brd ff:ff:ff:ff:ff:ff -- 5: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 link/ether fa:16:3e:47:99:98 brd ff:ff:ff:ff:ff:ff inet 10.xx.xx.xx/21 brd 10.xx.xx.255 scope global dynamic noprefixroute br-ex Additional resources Configure an external gateway on the default network 7.5.2. Troubleshooting Open vSwitch issues To troubleshoot some Open vSwitch (OVS) issues, you might need to configure the log level to include more information. 
If you modify the log level on a node temporarily, be aware that you can receive log messages from the machine config daemon on the node like the following example: E0514 12:47:17.998892 2281 daemon.go:1350] content mismatch for file /etc/systemd/system/ovs-vswitchd.service: [Unit] To avoid the log messages related to the mismatch, revert the log level change after you complete your troubleshooting. 7.5.2.1. Configuring the Open vSwitch log level temporarily For short-term troubleshooting, you can configure the Open vSwitch (OVS) log level temporarily. The following procedure does not require rebooting the node. In addition, the configuration change does not persist whenever you reboot the node. After you perform this procedure to change the log level, you can receive log messages from the machine config daemon that indicate a content mismatch for the ovs-vswitchd.service . To avoid the log messages, repeat this procedure and set the log level to the original value. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Start a debug pod for a node: USD oc debug node/<node_name> Set /host as the root directory within the debug shell. The debug pod mounts the root file system from the host in /host within the pod. By changing the root directory to /host , you can run binaries from the host file system: # chroot /host View the current syslog level for OVS modules: # ovs-appctl vlog/list The following example output shows the log level for syslog set to info . Example output console syslog file ------- ------ ------ backtrace OFF INFO INFO bfd OFF INFO INFO bond OFF INFO INFO bridge OFF INFO INFO bundle OFF INFO INFO bundles OFF INFO INFO cfm OFF INFO INFO collectors OFF INFO INFO command_line OFF INFO INFO connmgr OFF INFO INFO conntrack OFF INFO INFO conntrack_tp OFF INFO INFO coverage OFF INFO INFO ct_dpif OFF INFO INFO daemon OFF INFO INFO daemon_unix OFF INFO INFO dns_resolve OFF INFO INFO dpdk OFF INFO INFO ... Specify the log level in the /etc/systemd/system/ovs-vswitchd.service.d/10-ovs-vswitchd-restart.conf file: Restart=always ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /var/lib/openvswitch' ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /etc/openvswitch' ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /run/openvswitch' ExecStartPost=-/usr/bin/ovs-appctl vlog/set syslog:dbg ExecReload=-/usr/bin/ovs-appctl vlog/set syslog:dbg In the preceding example, the log level is set to dbg . Change the last two lines by setting syslog:<log_level> to off , emer , err , warn , info , or dbg . The off log level filters out all log messages. Restart the service: # systemctl daemon-reload # systemctl restart ovs-vswitchd 7.5.2.2. Configuring the Open vSwitch log level permanently For long-term changes to the Open vSwitch (OVS) log level, you can change the log level permanently. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). 
Procedure Create a file, such as 99-change-ovs-loglevel.yaml , with a MachineConfig object like the following example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master 1 name: 99-change-ovs-loglevel spec: config: ignition: version: 3.2.0 systemd: units: - dropins: - contents: | [Service] ExecStartPost=-/usr/bin/ovs-appctl vlog/set syslog:dbg 2 ExecReload=-/usr/bin/ovs-appctl vlog/set syslog:dbg name: 20-ovs-vswitchd-restart.conf name: ovs-vswitchd.service 1 After you perform this procedure to configure control plane nodes, repeat the procedure and set the role to worker to configure worker nodes. 2 Set the syslog:<log_level> value. Log levels are off , emer , err , warn , info , or dbg . Setting the value to off filters out all log messages. Apply the machine config: USD oc apply -f 99-change-ovs-loglevel.yaml Additional resources Understanding the Machine Config Operator Checking machine config pool status 7.5.2.3. Displaying Open vSwitch logs Use the following procedure to display Open vSwitch (OVS) logs. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Run one of the following commands: Display the logs by using the oc command from outside the cluster: USD oc adm node-logs <node_name> -u ovs-vswitchd Display the logs after logging on to a node in the cluster: # journalctl -b -f -u ovs-vswitchd.service One way to log on to a node is by using the oc debug node/<node_name> command. 7.6. Troubleshooting Operator issues Operators are a method of packaging, deploying, and managing an OpenShift Container Platform application. They act like an extension of the software vendor's engineering team, watching over an OpenShift Container Platform environment and using its current state to make decisions in real time. Operators are designed to handle upgrades seamlessly, react to failures automatically, and not take shortcuts, such as skipping a software backup process to save time. OpenShift Container Platform 4.15 includes a default set of Operators that are required for proper functioning of the cluster. These default Operators are managed by the Cluster Version Operator (CVO). As a cluster administrator, you can install application Operators from the OperatorHub using the OpenShift Container Platform web console or the CLI. You can then subscribe the Operator to one or more namespaces to make it available for developers on your cluster. Application Operators are managed by Operator Lifecycle Manager (OLM). If you experience Operator issues, verify Operator subscription status. Check Operator pod health across the cluster and gather Operator logs for diagnosis. 7.6.1. Operator subscription condition types Subscriptions can report the following condition types: Table 7.2. Subscription condition types Condition Description CatalogSourcesUnhealthy Some or all of the catalog sources to be used in resolution are unhealthy. InstallPlanMissing An install plan for a subscription is missing. InstallPlanPending An install plan for a subscription is pending installation. InstallPlanFailed An install plan for a subscription has failed. ResolutionFailed The dependency resolution for a subscription has failed. Note Default OpenShift Container Platform cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. 
Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object. Additional resources Catalog health requirements 7.6.2. Viewing Operator subscription status by using the CLI You can view Operator subscription status by using the CLI. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure List Operator subscriptions: USD oc get subs -n <operator_namespace> Use the oc describe command to inspect a Subscription resource: USD oc describe sub <subscription_name> -n <operator_namespace> In the command output, find the Conditions section for the status of Operator subscription condition types. In the following example, the CatalogSourcesUnhealthy condition type has a status of false because all available catalog sources are healthy: Example output Name: cluster-logging Namespace: openshift-logging Labels: operators.coreos.com/cluster-logging.openshift-logging= Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: Subscription # ... Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy # ... Note Default OpenShift Container Platform cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object. 7.6.3. Viewing Operator catalog source status by using the CLI You can view the status of an Operator catalog source by using the CLI. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure List the catalog sources in a namespace. For example, you can check the openshift-marketplace namespace, which is used for cluster-wide catalog sources: USD oc get catalogsources -n openshift-marketplace Example output NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m Use the oc describe command to get more details and status about a catalog source: USD oc describe catalogsource example-catalog -n openshift-marketplace Example output Name: example-catalog Namespace: openshift-marketplace Labels: <none> Annotations: operatorframework.io/managed-by: marketplace-operator target.workload.openshift.io/management: {"effect": "PreferredDuringScheduling"} API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource # ... Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace # ... In the preceding example output, the last observed state is TRANSIENT_FAILURE . This state indicates that there is a problem establishing a connection for the catalog source. 
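If you only need the connection state rather than the full description, you can query it directly with a jsonpath expression. This one-liner is a convenience sketch based on the Connection State fields shown in the preceding output; it assumes the standard CatalogSource status layout:
USD oc get catalogsource example-catalog -n openshift-marketplace -o jsonpath='{.status.connectionState.lastObservedState}{"\n"}'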
List the pods in the namespace where your catalog source was created: USD oc get pods -n openshift-marketplace Example output NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m When a catalog source is created in a namespace, a pod for the catalog source is created in that namespace. In the preceding example output, the status for the example-catalog-bwt8z pod is ImagePullBackOff . This status indicates that there is an issue pulling the catalog source's index image. Use the oc describe command to inspect a pod for more detailed information: USD oc describe pod example-catalog-bwt8z -n openshift-marketplace Example output Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image "quay.io/example-org/example-catalog:v1" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image "quay.io/example-org/example-catalog:v1" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image "quay.io/example-org/example-catalog:v1": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull In the preceding example output, the error messages indicate that the catalog source's index image is failing to pull successfully because of an authorization issue. For example, the index image might be stored in a registry that requires login credentials. Additional resources Operator Lifecycle Manager concepts and resources Catalog source gRPC documentation: States of Connectivity Accessing images for Operators from private registries 7.6.4. Querying Operator pod status You can list Operator pods within a cluster and their status. You can also collect a detailed Operator pod summary. Prerequisites You have access to the cluster as a user with the cluster-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). Procedure List Operators running in the cluster. The output includes Operator version, availability, and up-time information: USD oc get clusteroperators List Operator pods running in the Operator's namespace, plus pod status, restarts, and age: USD oc get pod -n <operator_namespace> Output a detailed Operator pod summary: USD oc describe pod <operator_pod_name> -n <operator_namespace> If an Operator issue is node-specific, query Operator container status on that node. Start a debug pod for the node: USD oc debug node/my-node Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. 
By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Note OpenShift Container Platform 4.15 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead. List details about the node's containers, including state and associated pod IDs: # crictl ps List information about a specific Operator container on the node. The following example lists information about the network-operator container: # crictl ps --name network-operator Exit from the debug shell. 7.6.5. Gathering Operator logs If you experience Operator issues, you can gather detailed diagnostic information from Operator pod logs. Prerequisites You have access to the cluster as a user with the cluster-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). You have the fully qualified domain names of the control plane machines. Procedure List the Operator pods that are running in the Operator's namespace, plus the pod status, restarts, and age: USD oc get pods -n <operator_namespace> Review logs for an Operator pod: USD oc logs pod/<pod_name> -n <operator_namespace> If an Operator pod has multiple containers, the preceding command will produce an error that includes the name of each container. Query logs from an individual container: USD oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace> If the API is not functional, review Operator pod and container logs on each control plane node by using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values. List pods on each control plane node: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods For any Operator pods not showing a Ready status, inspect the pod's status in detail. Replace <operator_pod_id> with the Operator pod's ID listed in the output of the preceding command: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <operator_pod_id> List containers related to an Operator pod: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps --pod=<operator_pod_id> For any Operator container not showing a Ready status, inspect the container's status in detail. Replace <container_id> with a container ID listed in the output of the preceding command: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id> Review the logs for any Operator containers not showing a Ready status. Replace <container_id> with a container ID listed in the output of the preceding command: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id> Note OpenShift Container Platform 4.15 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead.
However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> . 7.6.6. Disabling the Machine Config Operator from automatically rebooting When configuration changes are made by the Machine Config Operator (MCO), Red Hat Enterprise Linux CoreOS (RHCOS) must reboot for the changes to take effect. Whether the configuration change is automatic or manual, an RHCOS node reboots automatically unless it is paused. Note The following modifications do not trigger a node reboot: When the MCO detects any of the following changes, it applies the update without draining or rebooting the node: Changes to the SSH key in the spec.config.passwd.users.sshAuthorizedKeys parameter of a machine config. Changes to the global pull secret or pull secret in the openshift-config namespace. Automatic rotation of the /etc/kubernetes/kubelet-ca.crt certificate authority (CA) by the Kubernetes API Server Operator. When the MCO detects changes to the /etc/containers/registries.conf file, such as adding or editing an ImageDigestMirrorSet , ImageTagMirrorSet , or ImageContentSourcePolicy object, it drains the corresponding nodes, applies the changes, and uncordons the nodes. The node drain does not happen for the following changes: The addition of a registry with the pull-from-mirror = "digest-only" parameter set for each mirror. The addition of a mirror with the pull-from-mirror = "digest-only" parameter set in a registry. The addition of items to the unqualified-search-registries list. To avoid unwanted disruptions, you can modify the machine config pool (MCP) to prevent automatic rebooting after the Operator makes changes to the machine config. 7.6.6.1. Disabling the Machine Config Operator from automatically rebooting by using the console To avoid unwanted disruptions from changes made by the Machine Config Operator (MCO), you can use the OpenShift Container Platform web console to modify the machine config pool (MCP) to prevent the MCO from making any changes to nodes in that pool. This prevents any reboots that would normally be part of the MCO update process. Note See second NOTE in Disabling the Machine Config Operator from automatically rebooting . Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure To pause or unpause automatic MCO update rebooting: Pause the autoreboot process: Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Click Compute MachineConfigPools . On the MachineConfigPools page, click either master or worker , depending upon which nodes you want to pause rebooting for. On the master or worker page, click YAML . In the YAML, update the spec.paused field to true . Sample MachineConfigPool object apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool # ... spec: # ... paused: true 1 # ... 1 Update the spec.paused field to true to pause rebooting. To verify that the MCP is paused, return to the MachineConfigPools page. On the MachineConfigPools page, the Paused column reports True for the MCP you modified. If the MCP has pending changes while paused, the Updated column is False and Updating is False . When Updated is True and Updating is False , there are no pending changes. 
Important If there are pending changes (where both the Updated and Updating columns are False ), it is recommended to schedule a maintenance window for a reboot as early as possible. Use the following steps for unpausing the autoreboot process to apply the changes that were queued since the last reboot. Unpause the autoreboot process: Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Click Compute MachineConfigPools . On the MachineConfigPools page, click either master or worker , depending upon which nodes you want to unpause rebooting for. On the master or worker page, click YAML . In the YAML, update the spec.paused field to false . Sample MachineConfigPool object apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool # ... spec: # ... paused: false 1 # ... 1 Update the spec.paused field to false to allow rebooting. Note By unpausing an MCP, the MCO applies all paused changes and reboots Red Hat Enterprise Linux CoreOS (RHCOS) as needed. To verify that the MCP is unpaused, return to the MachineConfigPools page. On the MachineConfigPools page, the Paused column reports False for the MCP you modified. If the MCP is applying any pending changes, the Updated column is False and the Updating column is True . When Updated is True and Updating is False , there are no further changes being made. 7.6.6.2. Disabling the Machine Config Operator from automatically rebooting by using the CLI To avoid unwanted disruptions from changes made by the Machine Config Operator (MCO), you can modify the machine config pool (MCP) using the OpenShift CLI (oc) to prevent the MCO from making any changes to nodes in that pool. This prevents any reboots that would normally be part of the MCO update process. Note See second NOTE in Disabling the Machine Config Operator from automatically rebooting . Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure To pause or unpause automatic MCO update rebooting: Pause the autoreboot process: Update the MachineConfigPool custom resource to set the spec.paused field to true . Control plane (master) nodes USD oc patch --type=merge --patch='{"spec":{"paused":true}}' machineconfigpool/master Worker nodes USD oc patch --type=merge --patch='{"spec":{"paused":true}}' machineconfigpool/worker Verify that the MCP is paused: Control plane (master) nodes USD oc get machineconfigpool/master --template='{{.spec.paused}}' Worker nodes USD oc get machineconfigpool/worker --template='{{.spec.paused}}' Example output true The spec.paused field is true and the MCP is paused. Determine if the MCP has pending changes: USD oc get machineconfigpool Example output If the UPDATED column is False and UPDATING is False , there are pending changes. When UPDATED is True and UPDATING is False , there are no pending changes. In the example, the worker node has pending changes. The control plane node does not have any pending changes. Important If there are pending changes (where both the Updated and Updating columns are False ), it is recommended to schedule a maintenance window for a reboot as early as possible. Use the following steps for unpausing the autoreboot process to apply the changes that were queued since the last reboot. Unpause the autoreboot process: Update the MachineConfigPool custom resource to set the spec.paused field to false .
Control plane (master) nodes USD oc patch --type=merge --patch='{"spec":{"paused":false}}' machineconfigpool/master Worker nodes USD oc patch --type=merge --patch='{"spec":{"paused":false}}' machineconfigpool/worker Note By unpausing an MCP, the MCO applies all paused changes and reboots Red Hat Enterprise Linux CoreOS (RHCOS) as needed. Verify that the MCP is unpaused: Control plane (master) nodes USD oc get machineconfigpool/master --template='{{.spec.paused}}' Worker nodes USD oc get machineconfigpool/worker --template='{{.spec.paused}}' Example output false The spec.paused field is false and the MCP is unpaused. Determine if the MCP has pending changes: USD oc get machineconfigpool Example output If the MCP is applying any pending changes, the UPDATED column is False and the UPDATING column is True . When UPDATED is True and UPDATING is False , there are no further changes being made. In the example, the MCO is updating the worker node. 7.6.7. Refreshing failing subscriptions In Operator Lifecycle Manager (OLM), if you subscribe to an Operator that references images that are not accessible on your network, you can find jobs in the openshift-marketplace namespace that are failing with the following errors: Example output ImagePullBackOff for Back-off pulling image "example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e" Example output rpc error: code = Unknown desc = error pinging docker registry example.com: Get "https://example.com/v2/": dial tcp: lookup example.com on 10.0.0.1:53: no such host As a result, the subscription is stuck in this failing state and the Operator is unable to install or upgrade. You can refresh a failing subscription by deleting the subscription, cluster service version (CSV), and other related objects. After recreating the subscription, OLM then reinstalls the correct version of the Operator. Prerequisites You have a failing subscription that is unable to pull an inaccessible bundle image. You have confirmed that the correct bundle image is accessible. Procedure Get the names of the Subscription and ClusterServiceVersion objects from the namespace where the Operator is installed: USD oc get sub,csv -n <namespace> Example output NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded Delete the subscription: USD oc delete subscription <subscription_name> -n <namespace> Delete the cluster service version: USD oc delete csv <csv_name> -n <namespace> Get the names of any failing jobs and related config maps in the openshift-marketplace namespace: USD oc get job,configmap -n openshift-marketplace Example output NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s Delete the job: USD oc delete job <job_name> -n openshift-marketplace This ensures pods that try to pull the inaccessible image are not recreated. Delete the config map: USD oc delete configmap <configmap_name> -n openshift-marketplace Reinstall the Operator using OperatorHub in the web console. Verification Check that the Operator has been reinstalled successfully: USD oc get sub,csv,installplan -n <namespace> 7.6.8. 
Reinstalling Operators after failed uninstallation You must successfully and completely uninstall an Operator prior to attempting to reinstall the same Operator. Failure to fully uninstall the Operator can leave resources, such as a project or namespace, stuck in a "Terminating" state and cause "error resolving resource" messages. For example: Example Project resource description These types of issues can prevent an Operator from being reinstalled successfully. Warning Forced deletion of a namespace is not likely to resolve "Terminating" state issues and can lead to unstable or unpredictable cluster behavior, so it is better to try to find related resources that might be preventing the namespace from being deleted. For more information, see the Red Hat Knowledgebase Solution #4165791 , paying careful attention to the cautions and warnings. The following procedure shows how to troubleshoot when an Operator cannot be reinstalled because an existing custom resource definition (CRD) from a previous installation of the Operator is preventing a related namespace from deleting successfully. Procedure Check if there are any namespaces related to the Operator that are stuck in "Terminating" state: USD oc get namespaces Example output Check if there are any CRDs related to the Operator that are still present after the failed uninstallation: USD oc get crds Note CRDs are global cluster definitions; the actual custom resource (CR) instances related to the CRDs could be in other namespaces or be global cluster instances. If there are any CRDs that you know were provided or managed by the Operator and that should have been deleted after uninstallation, delete the CRD: USD oc delete crd <crd_name> Check if there are any remaining CR instances related to the Operator that are still present after uninstallation, and if so, delete the CRs: The type of CRs to search for can be difficult to determine after uninstallation and can require knowing what CRDs the Operator manages. For example, if you are troubleshooting an uninstallation of the etcd Operator, which provides the EtcdCluster CRD, you can search for remaining EtcdCluster CRs in a namespace: USD oc get EtcdCluster -n <namespace_name> Alternatively, you can search across all namespaces: USD oc get EtcdCluster --all-namespaces If there are any remaining CRs that should be removed, delete the instances: USD oc delete <cr_name> <cr_instance_name> -n <namespace_name> Check that the namespace deletion has successfully resolved: USD oc get namespace <namespace_name> Important If the namespace or other Operator resources are still not uninstalled cleanly, contact Red Hat Support. Reinstall the Operator using OperatorHub in the web console. Verification Check that the Operator has been reinstalled successfully: USD oc get sub,csv,installplan -n <namespace> Additional resources Deleting Operators from a cluster Adding Operators to a cluster 7.7. Investigating pod issues OpenShift Container Platform leverages the Kubernetes concept of a pod, which is one or more containers deployed together on one host. A pod is the smallest compute unit that can be defined, deployed, and managed on OpenShift Container Platform 4.15. After a pod is defined, it is assigned to run on a node until its containers exit, or until it is removed. Depending on policy and exit code, pods are either removed after exiting or retained so that their logs can be accessed. The first thing to check when pod issues arise is the pod's status.
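For example, one quick way to surface pods that are not in a Running or Succeeded phase across all namespaces is a field selector similar to the following; this command is an illustrative sketch rather than part of the official procedure, and you can scope it to a single project with -n <namespace> :
USD oc get pods --all-namespaces --field-selector=status.phase!=Running,status.phase!=Succeeded
Any pods returned here are candidates for the status and log checks described below.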
If an explicit pod failure has occurred, observe the pod's error state to identify specific image, container, or pod network issues. Focus diagnostic data collection according to the error state. Review pod event messages, as well as pod and container log information. Diagnose issues dynamically by accessing running pods on the command line, or start a debug pod with root access based on a problematic pod's deployment configuration. 7.7.1. Understanding pod error states Pod failures return explicit error states that can be observed in the status field in the output of oc get pods . Pod error states cover image, container, and container network related failures. The following table provides a list of pod error states along with their descriptions. Table 7.3. Pod error states Pod error state Description ErrImagePull Generic image retrieval error. ErrImagePullBackOff Image retrieval failed and is backed off. ErrInvalidImageName The specified image name was invalid. ErrImageInspect Image inspection did not succeed. ErrImageNeverPull PullPolicy is set to NeverPullImage and the target image is not present locally on the host. ErrRegistryUnavailable When attempting to retrieve an image from a registry, an HTTP error was encountered. ErrContainerNotFound The specified container is either not present or not managed by the kubelet, within the declared pod. ErrRunInitContainer Container initialization failed. ErrRunContainer None of the pod's containers started successfully. ErrKillContainer None of the pod's containers were killed successfully. ErrCrashLoopBackOff A container has terminated. The kubelet will not attempt to restart it. ErrVerifyNonRoot A container or image attempted to run with root privileges. ErrCreatePodSandbox Pod sandbox creation did not succeed. ErrConfigPodSandbox Pod sandbox configuration was not obtained. ErrKillPodSandbox A pod sandbox did not stop successfully. ErrSetupNetwork Network initialization failed. ErrTeardownNetwork Network termination failed. 7.7.2. Reviewing pod status You can query pod status and error states. You can also query a pod's associated deployment configuration and review base image availability. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). skopeo is installed. Procedure Switch into a project: USD oc project <project_name> List pods running within the namespace, as well as pod status, error states, restarts, and age: USD oc get pods Determine whether the namespace is managed by a deployment configuration: USD oc status If the namespace is managed by a deployment configuration, the output includes the deployment configuration name and a base image reference. Inspect the base image referenced in the preceding command's output: USD skopeo inspect docker://<image_reference> If the base image reference is not correct, update the reference in the deployment configuration: USD oc edit deployment/my-deployment If the deployment configuration has changed when you exit the editor, the configuration automatically redeploys. Watch pod status as the deployment progresses to determine whether the issue has been resolved: USD oc get pods -w Review events within the namespace for diagnostic information relating to pod failures: USD oc get events 7.7.3. Inspecting pod and container logs You can inspect pod and container logs for warnings and error messages related to explicit pod failures. Depending on policy and exit code, pod and container logs remain available after pods have been terminated.
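As an example of retrieving logs from a terminated container, the --previous flag returns the logs of the last restarted container instance; the pod and container names here are placeholders:
USD oc logs <pod_name> -c <container_name> --previous
This is often useful when a container is crash looping and its current instance has not yet produced meaningful output.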
Prerequisites You have access to the cluster as a user with the cluster-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). Procedure Query logs for a specific pod: USD oc logs <pod_name> Query logs for a specific container within a pod: USD oc logs <pod_name> -c <container_name> Logs retrieved using the preceding oc logs commands are composed of messages sent to stdout within pods or containers. Inspect logs contained in /var/log/ within a pod. List log files and subdirectories contained in /var/log within a pod: USD oc exec <pod_name> -- ls -alh /var/log Example output total 124K drwxr-xr-x. 1 root root 33 Aug 11 11:23 . drwxr-xr-x. 1 root root 28 Sep 6 2022 .. -rw-rw----. 1 root utmp 0 Jul 10 10:31 btmp -rw-r--r--. 1 root root 33K Jul 17 10:07 dnf.librepo.log -rw-r--r--. 1 root root 69K Jul 17 10:07 dnf.log -rw-r--r--. 1 root root 8.8K Jul 17 10:07 dnf.rpm.log -rw-r--r--. 1 root root 480 Jul 17 10:07 hawkey.log -rw-rw-r--. 1 root utmp 0 Jul 10 10:31 lastlog drwx------. 2 root root 23 Aug 11 11:14 openshift-apiserver drwx------. 2 root root 6 Jul 10 10:31 private drwxr-xr-x. 1 root root 22 Mar 9 08:05 rhsm -rw-rw-r--. 1 root utmp 0 Jul 10 10:31 wtmp Query a specific log file contained in /var/log within a pod: USD oc exec <pod_name> cat /var/log/<path_to_log> Example output 2023-07-10T10:29:38+0000 INFO --- logging initialized --- 2023-07-10T10:29:38+0000 DDEBUG timer: config: 13 ms 2023-07-10T10:29:38+0000 DEBUG Loaded plugins: builddep, changelog, config-manager, copr, debug, debuginfo-install, download, generate_completion_cache, groups-manager, needs-restarting, playground, product-id, repoclosure, repodiff, repograph, repomanage, reposync, subscription-manager, uploadprofile 2023-07-10T10:29:38+0000 INFO Updating Subscription Management repositories. 2023-07-10T10:29:38+0000 INFO Unable to read consumer identity 2023-07-10T10:29:38+0000 INFO Subscription Manager is operating in container mode. 2023-07-10T10:29:38+0000 INFO List log files and subdirectories contained in /var/log within a specific container: USD oc exec <pod_name> -c <container_name> ls /var/log Query a specific log file contained in /var/log within a specific container: USD oc exec <pod_name> -c <container_name> cat /var/log/<path_to_log> 7.7.4. Accessing running pods You can review running pods dynamically by opening a shell inside a pod or by gaining network access through port forwarding. Prerequisites You have access to the cluster as a user with the cluster-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). Procedure Switch into the project that contains the pod you would like to access. This is necessary because the oc rsh command does not accept the -n namespace option: USD oc project <namespace> Start a remote shell into a pod: USD oc rsh <pod_name> 1 1 If a pod has multiple containers, oc rsh defaults to the first container unless -c <container_name> is specified. Start a remote shell into a specific container within a pod: USD oc rsh -c <container_name> pod/<pod_name> Create a port forwarding session to a port on a pod: USD oc port-forward <pod_name> <host_port>:<pod_port> 1 1 Enter Ctrl+C to cancel the port forwarding session. 7.7.5. Starting debug pods with root access You can start a debug pod with root access, based on a problematic pod's deployment or deployment configuration. 
Pod users typically run with non-root privileges, but running troubleshooting pods with temporary root privileges can be useful during issue investigation. Prerequisites You have access to the cluster as a user with the cluster-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). Procedure Start a debug pod with root access, based on a deployment. Obtain a project's deployment name: USD oc get deployment -n <project_name> Start a debug pod with root privileges, based on the deployment: USD oc debug deployment/my-deployment --as-root -n <project_name> Start a debug pod with root access, based on a deployment configuration. Obtain a project's deployment configuration name: USD oc get deploymentconfigs -n <project_name> Start a debug pod with root privileges, based on the deployment configuration: USD oc debug deploymentconfig/my-deployment-configuration --as-root -n <project_name> Note You can append -- <command> to the preceding oc debug commands to run individual commands within a debug pod, instead of running an interactive shell. 7.7.6. Copying files to and from pods and containers You can copy files to and from a pod to test configuration changes or gather diagnostic information. Prerequisites You have access to the cluster as a user with the cluster-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). Procedure Copy a file to a pod: USD oc cp <local_path> <pod_name>:/<path> -c <container_name> 1 1 The first container in a pod is selected if the -c option is not specified. Copy a file from a pod: USD oc cp <pod_name>:/<path> -c <container_name> <local_path> 1 1 The first container in a pod is selected if the -c option is not specified. Note For oc cp to function, the tar binary must be available within the container. 7.8. Troubleshooting the Source-to-Image process 7.8.1. Strategies for Source-to-Image troubleshooting Use Source-to-Image (S2I) to build reproducible, Docker-formatted container images. You can create ready-to-run images by injecting application source code into a container image and assembling a new image. The new image incorporates the base image (the builder) and built source. To determine where in the S2I process a failure occurs, you can observe the state of the pods relating to each of the following S2I stages: During the build configuration stage , a build pod is used to create an application container image from a base image and application source code. During the deployment configuration stage , a deployment pod is used to deploy application pods from the application container image that was built in the build configuration stage. The deployment pod also deploys other resources such as services and routes. The deployment configuration begins after the build configuration succeeds. After the deployment pod has started the application pods , application failures can occur within the running application pods. For instance, an application might not behave as expected even though the application pods are in a Running state. In this scenario, you can access running application pods to investigate application failures within a pod. When troubleshooting S2I issues, follow this strategy: Monitor build, deployment, and application pod status Determine the stage of the S2I process where the problem occurred Review logs corresponding to the failed stage 7.8.2. Gathering Source-to-Image diagnostic data The S2I tool runs a build pod and a deployment pod in sequence. 
The deployment pod is responsible for deploying the application pods based on the application container image created in the build stage. Watch build, deployment and application pod status to determine where in the S2I process a failure occurs. Then, focus diagnostic data collection accordingly. Prerequisites You have access to the cluster as a user with the cluster-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). Procedure Watch the pod status throughout the S2I process to determine at which stage a failure occurs: USD oc get pods -w 1 1 Use -w to monitor pods for changes until you quit the command using Ctrl+C . Review a failed pod's logs for errors. If the build pod fails , review the build pod's logs: USD oc logs -f pod/<application_name>-<build_number>-build Note Alternatively, you can review the build configuration's logs using oc logs -f bc/<application_name> . The build configuration's logs include the logs from the build pod. If the deployment pod fails , review the deployment pod's logs: USD oc logs -f pod/<application_name>-<build_number>-deploy Note Alternatively, you can review the deployment configuration's logs using oc logs -f dc/<application_name> . This outputs logs from the deployment pod until the deployment pod completes successfully. The command outputs logs from the application pods if you run it after the deployment pod has completed. After a deployment pod completes, its logs can still be accessed by running oc logs -f pod/<application_name>-<build_number>-deploy . If an application pod fails, or if an application is not behaving as expected within a running application pod , review the application pod's logs: USD oc logs -f pod/<application_name>-<build_number>-<random_string> 7.8.3. Gathering application diagnostic data to investigate application failures Application failures can occur within running application pods. In these situations, you can retrieve diagnostic information with these strategies: Review events relating to the application pods. Review the logs from the application pods, including application-specific log files that are not collected by the OpenShift Logging framework. Test application functionality interactively and run diagnostic tools in an application container. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure List events relating to a specific application pod. The following example retrieves events for an application pod named my-app-1-akdlg : USD oc describe pod/my-app-1-akdlg Review logs from an application pod: USD oc logs -f pod/my-app-1-akdlg Query specific logs within a running application pod. Logs that are sent to stdout are collected by the OpenShift Logging framework and are included in the output of the preceding command. The following query is only required for logs that are not sent to stdout. If an application log can be accessed without root privileges within a pod, concatenate the log file as follows: USD oc exec my-app-1-akdlg -- cat /var/log/my-application.log If root access is required to view an application log, you can start a debug container with root privileges and then view the log file from within the container. Start the debug container from the project's DeploymentConfig object. 
Pod users typically run with non-root privileges, but running troubleshooting pods with temporary root privileges can be useful during issue investigation: USD oc debug dc/my-deployment-configuration --as-root -- cat /var/log/my-application.log Note You can access an interactive shell with root access within the debug pod if you run oc debug dc/<deployment_configuration> --as-root without appending -- <command> . Test application functionality interactively and run diagnostic tools in an application container with an interactive shell. Start an interactive shell on the application container: USD oc exec -it my-app-1-akdlg -- /bin/bash Test application functionality interactively from within the shell. For example, you can run the container's entry point command and observe the results. Then, test changes from the command line directly, before updating the source code and rebuilding the application container through the S2I process. Run diagnostic binaries available within the container. Note Root privileges are required to run some diagnostic binaries. In these situations, you can start a debug pod with root access, based on a problematic pod's DeploymentConfig object, by running oc debug dc/<deployment_configuration> --as-root . Then, you can run diagnostic binaries as root from within the debug pod. If diagnostic binaries are not available within a container, you can run a host's diagnostic binaries within a container's namespace by using nsenter . The following example runs ip ad within a container's namespace, using the host's ip binary. Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug : USD oc debug node/my-cluster-node Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Note OpenShift Container Platform 4.15 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead. Determine the target container ID: # crictl ps Determine the container's process ID. In this example, the target container ID is a7fe32346b120 : # crictl inspect a7fe32346b120 --output yaml | grep 'pid:' | awk '{print USD2}' Run ip ad within the container's namespace, using the host's ip binary. This example uses 31150 as the container's process ID. The nsenter command enters the namespace of a target process and runs a command in its namespace. Because the target process in this example is a container's process ID, the ip ad command is run in the container's namespace from the host: # nsenter -n -t 31150 -- ip ad Note Running a host's diagnostic binaries within a container's namespace is only possible if you are using a privileged container such as a debug node. 7.8.4. Additional resources See Source-to-Image (S2I) build for more details about the S2I build strategy. 7.9. Troubleshooting storage issues 7.9.1.
Resolving multi-attach errors When a node crashes or shuts down abruptly, the attached ReadWriteOnce (RWO) volume is expected to be unmounted from the node so that it can be used by a pod scheduled on another node. However, mounting on a new node is not possible because the failed node is unable to unmount the attached volume. A multi-attach error is reported: Example output Unable to attach or mount volumes: unmounted volumes=[sso-mysql-pvol], unattached volumes=[sso-mysql-pvol default-token-x4rzc]: timed out waiting for the condition Multi-Attach error for volume "pvc-8837384d-69d7-40b2-b2e6-5df86943eef9" Volume is already used by pod(s) sso-mysql-1-ns6b4 Procedure To resolve the multi-attach issue, use one of the following solutions: Enable multiple attachments by using RWX volumes. For most storage solutions, you can use ReadWriteMany (RWX) volumes to prevent multi-attach errors. Recover or delete the failed node when using an RWO volume. For storage that does not support RWX, such as VMware vSphere, RWO volumes must be used instead. However, RWO volumes cannot be mounted on multiple nodes. If you encounter a multi-attach error message with an RWO volume, force delete the pod on a shutdown or crashed node to avoid data loss in critical workloads, such as when dynamic persistent volumes are attached. USD oc delete pod <old_pod> --force=true --grace-period=0 This command deletes the volumes stuck on shutdown or crashed nodes after six minutes. 7.10. Troubleshooting Windows container workload issues 7.10.1. Windows Machine Config Operator does not install If you have completed the process of installing the Windows Machine Config Operator (WMCO), but the Operator is stuck in the InstallWaiting phase, your issue is likely caused by a networking issue. The WMCO requires your OpenShift Container Platform cluster to be configured with hybrid networking using OVN-Kubernetes; the WMCO cannot complete the installation process without hybrid networking available. This is necessary to manage nodes on multiple operating systems (OS) and OS variants. This must be completed during the installation of your cluster. For more information, see Configuring hybrid networking . 7.10.2. Investigating why Windows Machine does not become compute node There are various reasons why a Windows Machine does not become a compute node. The best way to investigate this problem is to collect the Windows Machine Config Operator (WMCO) logs. Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You have created a Windows compute machine set. Procedure Run the following command to collect the WMCO logs: USD oc logs -f deployment/windows-machine-config-operator -n openshift-windows-machine-config-operator 7.10.3. Accessing a Windows node Windows nodes cannot be accessed using the oc debug node command; the command requires running a privileged pod on the node, which is not yet supported for Windows. Instead, a Windows node can be accessed using a secure shell (SSH) or Remote Desktop Protocol (RDP). An SSH bastion is required for both methods. 7.10.3.1. Accessing a Windows node using SSH You can access a Windows node by using a secure shell (SSH). Prerequisites You have installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You have created a Windows compute machine set. You have added the key used in the cloud-private-key secret and the key used when creating the cluster to the ssh-agent. 
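For example, keys can be added to, and later removed from, the ssh-agent with commands similar to the following; the key paths are placeholders rather than files created by this procedure:
USD ssh-add <path_to_cloud_private_key>
USD ssh-add <path_to_cluster_ssh_key>
USD ssh-add -d <path_to_cloud_private_key>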
For security reasons, remember to remove the keys from the ssh-agent after use. You have connected to the Windows node using an ssh-bastion pod . Procedure Access the Windows node by running the following command: USD ssh -t -o StrictHostKeyChecking=no -o ProxyCommand='ssh -A -o StrictHostKeyChecking=no \ -o ServerAliveInterval=30 -W %h:%p core@USD(oc get service --all-namespaces -l run=ssh-bastion \ -o go-template="{{ with (index (index .items 0).status.loadBalancer.ingress 0) }}{{ or .hostname .ip }}{{end}}")' <username>@<windows_node_internal_ip> 1 2 1 Specify the cloud provider username, such as Administrator for Amazon Web Services (AWS) or capi for Microsoft Azure. 2 Specify the internal IP address of the node, which can be discovered by running the following command: USD oc get nodes <node_name> -o jsonpath={.status.addresses[?\(@.type==\"InternalIP\"\)].address} 7.10.3.2. Accessing a Windows node using RDP You can access a Windows node by using a Remote Desktop Protocol (RDP). Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You have created a Windows compute machine set. You have added the key used in the cloud-private-key secret and the key used when creating the cluster to the ssh-agent. For security reasons, remember to remove the keys from the ssh-agent after use. You have connected to the Windows node using an ssh-bastion pod . Procedure Run the following command to set up an SSH tunnel: USD ssh -L 2020:<windows_node_internal_ip>:3389 \ 1 core@USD(oc get service --all-namespaces -l run=ssh-bastion -o go-template="{{ with (index (index .items 0).status.loadBalancer.ingress 0) }}{{ or .hostname .ip }}{{end}}") 1 Specify the internal IP address of the node, which can be discovered by running the following command: USD oc get nodes <node_name> -o jsonpath={.status.addresses[?\(@.type==\"InternalIP\"\)].address} From within the resulting shell, SSH into the Windows node and run the following command to create a password for the user: C:\> net user <username> * 1 1 Specify the cloud provider user name, such as Administrator for AWS or capi for Azure. You can now remotely access the Windows node at localhost:2020 using an RDP client. 7.10.4. Collecting Kubernetes node logs for Windows containers Windows container logging works differently from Linux container logging; the Kubernetes node logs for Windows workloads are streamed to the C:\var\logs directory by default. Therefore, you must gather the Windows node logs from that directory. Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You have created a Windows compute machine set. Procedure To view the logs under all directories in C:\var\logs , run the following command: USD oc adm node-logs -l kubernetes.io/os=windows --path= \ /ip-10-0-138-252.us-east-2.compute.internal containers \ /ip-10-0-138-252.us-east-2.compute.internal hybrid-overlay \ /ip-10-0-138-252.us-east-2.compute.internal kube-proxy \ /ip-10-0-138-252.us-east-2.compute.internal kubelet \ /ip-10-0-138-252.us-east-2.compute.internal pods You can now list files in the directories using the same command and view the individual log files. For example, to view the kubelet logs, run the following command: USD oc adm node-logs -l kubernetes.io/os=windows --path=/kubelet/kubelet.log 7.10.5. Collecting Windows application event logs The Get-WinEvent shim on the kubelet logs endpoint can be used to collect application event logs from Windows machines. 
Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You have created a Windows compute machine set. Procedure To view logs from all applications logging to the event logs on the Windows machine, run: USD oc adm node-logs -l kubernetes.io/os=windows --path=journal The same command is executed when collecting logs with oc adm must-gather . Other Windows application logs from the event log can also be collected by specifying the respective service with a -u flag. For example, you can run the following command to collect logs for the docker runtime service: USD oc adm node-logs -l kubernetes.io/os=windows --path=journal -u docker 7.10.6. Collecting Docker logs for Windows containers The Windows Docker service does not stream its logs to stdout, but instead, logs to the event log for Windows. You can view the Docker event logs to investigate issues you think might be caused by the Windows Docker service. Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You have created a Windows compute machine set. Procedure SSH into the Windows node and enter PowerShell: C:\> powershell View the Docker logs by running the following command: C:\> Get-EventLog -LogName Application -Source Docker 7.10.7. Additional resources Containers on Windows troubleshooting Troubleshoot host and container image mismatches Docker for Windows troubleshooting Common Kubernetes problems with Windows 7.11. Investigating monitoring issues OpenShift Container Platform includes a preconfigured, preinstalled, and self-updating monitoring stack that provides monitoring for core platform components. In OpenShift Container Platform 4.15, cluster administrators can optionally enable monitoring for user-defined projects. Use these procedures if the following issues occur: Your own metrics are unavailable. Prometheus is consuming a lot of disk space. The KubePersistentVolumeFillingUp alert is firing for Prometheus. 7.11.1. Investigating why user-defined project metrics are unavailable ServiceMonitor resources enable you to determine how to use the metrics exposed by a service in user-defined projects. Follow the steps outlined in this procedure if you have created a ServiceMonitor resource but cannot see any corresponding metrics in the Metrics UI. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have enabled and configured monitoring for user-defined projects. You have created a ServiceMonitor resource. Procedure Check that the corresponding labels match in the service and ServiceMonitor resource configurations. Obtain the label defined in the service. The following example queries the prometheus-example-app service in the ns1 project: USD oc -n ns1 get service prometheus-example-app -o yaml Example output labels: app: prometheus-example-app Check that the matchLabels definition in the ServiceMonitor resource configuration matches the label output in the preceding step. 
The following example queries the prometheus-example-monitor service monitor in the ns1 project: USD oc -n ns1 get servicemonitor prometheus-example-monitor -o yaml Example output apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-example-app Note You can check service and ServiceMonitor resource labels as a developer with view permissions for the project. Inspect the logs for the Prometheus Operator in the openshift-user-workload-monitoring project. List the pods in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring get pods Example output NAME READY STATUS RESTARTS AGE prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m Obtain the logs from the prometheus-operator container in the prometheus-operator pod. In the following example, the pod is called prometheus-operator-776fcbbd56-2nbfm : USD oc -n openshift-user-workload-monitoring logs prometheus-operator-776fcbbd56-2nbfm -c prometheus-operator If there is an issue with the service monitor, the logs might include an error similar to this example: level=warn ts=2020-08-10T11:48:20.906739623Z caller=operator.go:1829 component=prometheusoperator msg="skipping servicemonitor" error="it accesses file system via bearer token file which Prometheus specification prohibits" servicemonitor=eagle/eagle namespace=openshift-user-workload-monitoring prometheus=user-workload Review the target status for your endpoint on the Metrics targets page in the OpenShift Container Platform web console UI. Log in to the OpenShift Container Platform web console and navigate to Observe Targets in the Administrator perspective. Locate the metrics endpoint in the list, and review the status of the target in the Status column. If the Status is Down , click the URL for the endpoint to view more information on the Target Details page for that metrics target. Configure debug level logging for the Prometheus Operator in the openshift-user-workload-monitoring project. Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add logLevel: debug for prometheusOperator under data/config.yaml to set the log level to debug : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheusOperator: logLevel: debug # ... Save the file to apply the changes. The affected prometheus-operator pod is automatically redeployed. Confirm that the debug log-level has been applied to the prometheus-operator deployment in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep "log-level" Example output - --log-level=debug Debug level logging will show all calls made by the Prometheus Operator. Check that the prometheus-operator pod is running: USD oc -n openshift-user-workload-monitoring get pods Note If an unrecognized Prometheus Operator loglevel value is included in the config map, the prometheus-operator pod might not restart successfully.
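If the prometheus-operator pod does not restart after you save the config map, you can check the rollout and the pod events for the cause; these commands are a sketch that reuses the namespace and deployment name from this procedure:
USD oc -n openshift-user-workload-monitoring rollout status deploy/prometheus-operator
USD oc -n openshift-user-workload-monitoring describe pod <prometheus_operator_pod_name>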
Review the debug logs to see if the Prometheus Operator is using the ServiceMonitor resource. Review the logs for other related errors. Additional resources Enabling monitoring for user-defined projects See Specifying how a service is monitored for details on how to create a service monitor or pod monitor See Getting detailed information about a metrics target 7.11.2. Determining why Prometheus is consuming a lot of disk space Developers can create labels to define attributes for metrics in the form of key-value pairs. The number of potential key-value pairs corresponds to the number of possible values for an attribute. An attribute that has an unlimited number of potential values is called an unbound attribute. For example, a customer_id attribute is unbound because it has an infinite number of possible values. Every assigned key-value pair has a unique time series. The use of many unbound attributes in labels can result in an exponential increase in the number of time series created. This can impact Prometheus performance and can consume a lot of disk space. You can use the following measures when Prometheus consumes a lot of disk: Check the time series database (TSDB) status using the Prometheus HTTP API for more information about which labels are creating the most time series data. Doing so requires cluster administrator privileges. Check the number of scrape samples that are being collected. Reduce the number of unique time series that are created by reducing the number of unbound attributes that are assigned to user-defined metrics. Note Using attributes that are bound to a limited set of possible values reduces the number of potential key-value pair combinations. Enforce limits on the number of samples that can be scraped across user-defined projects. This requires cluster administrator privileges. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). Procedure In the Administrator perspective, navigate to Observe Metrics . Enter a Prometheus Query Language (PromQL) query in the Expression field. The following example queries help to identify high cardinality metrics that might result in high disk space consumption: By running the following query, you can identify the ten jobs that have the highest number of scrape samples: topk(10, max by(namespace, job) (topk by(namespace, job) (1, scrape_samples_post_metric_relabeling))) By running the following query, you can pinpoint time series churn by identifying the ten jobs that have created the most time series data in the last hour: topk(10, sum by(namespace, job) (sum_over_time(scrape_series_added[1h]))) Investigate the number of unbound label values assigned to metrics with higher than expected scrape sample counts: If the metrics relate to a user-defined project , review the metrics key-value pairs assigned to your workload. These are implemented through Prometheus client libraries at the application level. Try to limit the number of unbound attributes referenced in your labels. If the metrics relate to a core OpenShift Container Platform project , create a Red Hat support case on the Red Hat Customer Portal . 
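As a rough illustration of how an unbound label inflates series counts, a query of the following form in the Expression field shows how many distinct values a suspect label contributes to a metric; the metric and label names are placeholders for values from your own workload:
count(count by (<label_name>) (<metric_name>))
A result that keeps growing over time usually indicates an unbound attribute that should be removed or bounded.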
Review the TSDB status using the Prometheus HTTP API by following these steps when logged in as a cluster administrator: Get the Prometheus API route URL by running the following command: USD HOST=USD(oc -n openshift-monitoring get route prometheus-k8s -ojsonpath={.status.ingress[].host}) Extract an authentication token by running the following command: USD TOKEN=USD(oc whoami -t) Query the TSDB status for Prometheus by running the following command: USD curl -H "Authorization: Bearer USDTOKEN" -k "https://USDHOST/api/v1/status/tsdb" Example output "status": "success","data":{"headStats":{"numSeries":507473, "numLabelPairs":19832,"chunkCount":946298,"minTime":1712253600010, "maxTime":1712257935346},"seriesCountByMetricName": [{"name":"etcd_request_duration_seconds_bucket","value":51840}, {"name":"apiserver_request_sli_duration_seconds_bucket","value":47718}, ... Additional resources Setting a scrape sample limit for user-defined projects 7.11.3. Resolving the KubePersistentVolumeFillingUp alert firing for Prometheus As a cluster administrator, you can resolve the KubePersistentVolumeFillingUp alert being triggered for Prometheus. The critical alert fires when a persistent volume (PV) claimed by a prometheus-k8s-* pod in the openshift-monitoring project has less than 3% total space remaining. This can cause Prometheus to function abnormally. Note There are two KubePersistentVolumeFillingUp alerts: Critical alert : The alert with the severity="critical" label is triggered when the mounted PV has less than 3% total space remaining. Warning alert : The alert with the severity="warning" label is triggered when the mounted PV has less than 15% total space remaining and is expected to fill up within four days. To address this issue, you can remove Prometheus time-series database (TSDB) blocks to create more space for the PV. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). Procedure List the size of all TSDB blocks, sorted from oldest to newest, by running the following command: USD oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \ 1 -c prometheus --image=USD(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \ 2 -o jsonpath='{.spec.containers[?(@.name=="prometheus")].image}') \ -- sh -c 'cd /prometheus/;du -hs USD(ls -dt */ | grep -Eo "[0-9|A-Z]{26}")' 1 2 Replace <prometheus_k8s_pod_name> with the pod mentioned in the KubePersistentVolumeFillingUp alert description. Example output 308M 01HVKMPKQWZYWS8WVDAYQHNMW6 52M 01HVK64DTDA81799TBR9QDECEZ 102M 01HVK64DS7TRZRWF2756KHST5X 140M 01HVJS59K11FBVAPVY57K88Z11 90M 01HVH2A5Z58SKT810EM6B9AT50 152M 01HV8ZDVQMX41MKCN84S32RRZ1 354M 01HV6Q2N26BK63G4RYTST71FBF 156M 01HV664H9J9Z1FTZD73RD1563E 216M 01HTHXB60A7F239HN7S2TENPNS 104M 01HTHMGRXGS0WXA3WATRXHR36B Identify which and how many blocks could be removed, then remove the blocks. 
The following example command removes the three oldest Prometheus TSDB blocks from the prometheus-k8s-0 pod: USD oc debug prometheus-k8s-0 -n openshift-monitoring \ -c prometheus --image=USD(oc get po -n openshift-monitoring prometheus-k8s-0 \ -o jsonpath='{.spec.containers[?(@.name=="prometheus")].image}') \ -- sh -c 'ls -latr /prometheus/ | egrep -o "[0-9|A-Z]{26}" | head -3 | \ while read BLOCK; do rm -r /prometheus/USDBLOCK; done' Verify the usage of the mounted PV and ensure there is enough space available by running the following command: USD oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \ 1 --image=USD(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \ 2 -o jsonpath='{.spec.containers[?(@.name=="prometheus")].image}') -- df -h /prometheus/ 1 2 Replace <prometheus_k8s_pod_name> with the pod mentioned in the KubePersistentVolumeFillingUp alert description. The following example output shows the mounted PV claimed by the prometheus-k8s-0 pod that has 63% of space remaining: Example output Starting pod/prometheus-k8s-0-debug-j82w4 ... Filesystem Size Used Avail Use% Mounted on /dev/nvme0n1p4 40G 15G 40G 37% /prometheus Removing debug pod ... 7.12. Diagnosing OpenShift CLI ( oc ) issues 7.12.1. Understanding OpenShift CLI ( oc ) log levels With the OpenShift CLI ( oc ), you can create applications and manage OpenShift Container Platform projects from a terminal. If oc command-specific issues arise, increase the oc log level to output API request, API response, and curl request details generated by the command. This provides a granular view of a particular oc command's underlying operation, which in turn might provide insight into the nature of a failure. oc log levels range from 1 to 10. The following table provides a list of oc log levels, along with their descriptions. Table 7.4. OpenShift CLI (oc) log levels Log level Description 1 to 5 No additional logging to stderr. 6 Log API requests to stderr. 7 Log API requests and headers to stderr. 8 Log API requests, headers, and body, plus API response headers and body to stderr. 9 Log API requests, headers, and body, API response headers and body, plus curl requests to stderr. 10 Log API requests, headers, and body, API response headers and body, plus curl requests to stderr, in verbose detail. 7.12.2. Specifying OpenShift CLI ( oc ) log levels You can investigate OpenShift CLI ( oc ) issues by increasing the command's log level. The OpenShift Container Platform user's current session token is typically included in logged curl requests where required. You can also obtain the current user's session token manually, for use when testing aspects of an oc command's underlying process step-by-step. Prerequisites Install the OpenShift CLI ( oc ). Procedure Specify the oc log level when running an oc command: USD oc <command> --loglevel <log_level> where: <command> Specifies the command you are running. <log_level> Specifies the log level to apply to the command. To obtain the current user's session token, run the following command: USD oc whoami -t Example output sha256~RCV3Qcn7H-OEfqCGVI0CvnZ6... | [
"ssh <user_name>@<load_balancer> systemctl status haproxy",
"ssh <user_name>@<load_balancer> netstat -nltupe | grep -E ':80|:443|:6443|:22623'",
"ssh <user_name>@<load_balancer> ss -nltupe | grep -E ':80|:443|:6443|:22623'",
"dig <wildcard_fqdn> @<dns_server>",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete --log-level debug 1",
"./openshift-install create ignition-configs --dir=./install_dir",
"tail -f ~/<installation_directory>/.openshift_install.log",
"ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service",
"oc adm node-logs --role=master -u kubelet",
"ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service",
"oc adm node-logs --role=master -u crio",
"ssh [email protected]_name.sub_domain.domain journalctl -b -f -u crio.service",
"curl -I http://<http_server_fqdn>:<port>/bootstrap.ign 1",
"grep -is 'bootstrap.ign' /var/log/httpd/access_log",
"ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service",
"ssh core@<bootstrap_fqdn> 'for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done'",
"curl -I http://<http_server_fqdn>:<port>/master.ign 1",
"grep -is 'master.ign' /var/log/httpd/access_log",
"oc get nodes",
"oc describe node <master_node>",
"oc get daemonsets -n openshift-sdn",
"oc get pods -n openshift-sdn",
"oc logs <sdn_pod> -n openshift-sdn",
"oc get network.config.openshift.io cluster -o yaml",
"./openshift-install create manifests",
"oc get pods -n openshift-network-operator",
"oc logs pod/<network_operator_pod_name> -n openshift-network-operator",
"oc adm node-logs --role=master -u kubelet",
"ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service",
"oc adm node-logs --role=master -u crio",
"ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service",
"oc adm node-logs --role=master --path=openshift-apiserver",
"oc adm node-logs --role=master --path=openshift-apiserver/audit.log",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps -a",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>",
"curl https://api-int.<cluster_name>:22623/config/master",
"dig api-int.<cluster_name> @<dns_server>",
"dig -x <load_balancer_mco_ip_address> @<dns_server>",
"ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/master",
"ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking",
"openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text",
"oc get pods -n openshift-etcd",
"oc get pods -n openshift-etcd-operator",
"oc describe pod/<pod_name> -n <namespace>",
"oc logs pod/<pod_name> -n <namespace>",
"oc logs pod/<pod_name> -c <container_name> -n <namespace>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods --name=etcd-",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <pod_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps | grep '<pod_id>'",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>",
"oc adm node-logs --role=master -u kubelet",
"ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service",
"oc adm node-logs --role=master -u kubelet | grep -is 'x509: certificate has expired'",
"ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service | grep -is 'x509: certificate has expired'",
"curl -I http://<http_server_fqdn>:<port>/worker.ign 1",
"grep -is 'worker.ign' /var/log/httpd/access_log",
"oc get nodes",
"oc describe node <worker_node>",
"oc get pods -n openshift-machine-api",
"oc describe pod/<machine_api_operator_pod_name> -n openshift-machine-api",
"oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c machine-api-operator",
"oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c kube-rbac-proxy",
"oc adm node-logs --role=worker -u kubelet",
"ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service",
"oc adm node-logs --role=worker -u crio",
"ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service",
"oc adm node-logs --role=worker --path=sssd",
"oc adm node-logs --role=worker --path=sssd/sssd.log",
"ssh core@<worker-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/sssd/sssd.log",
"ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl ps -a",
"ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>",
"curl https://api-int.<cluster_name>:22623/config/worker",
"dig api-int.<cluster_name> @<dns_server>",
"dig -x <load_balancer_mco_ip_address> @<dns_server>",
"ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/worker",
"ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking",
"openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text",
"oc get clusteroperators",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 1 csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending 2 csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc describe clusteroperator <operator_name>",
"oc get pods -n <operator_namespace>",
"oc describe pod/<operator_pod_name> -n <operator_namespace>",
"oc logs pod/<operator_pod_name> -n <operator_namespace>",
"oc get pod -o \"jsonpath={range .status.containerStatuses[*]}{.name}{'\\t'}{.state}{'\\t'}{.image}{'\\n'}{end}\" <operator_pod_name> -n <operator_namespace>",
"oc adm release info <image_path>:<tag> --commits",
"./openshift-install gather bootstrap --dir <installation_directory> 1",
"./openshift-install gather bootstrap --dir <installation_directory> \\ 1 --bootstrap <bootstrap_address> \\ 2 --master <master_1_address> \\ 3 --master <master_2_address> \\ 4 --master <master_3_address> 5",
"INFO Pulling debug logs from the bootstrap machine INFO Bootstrap gather logs captured here \"<installation_directory>/log-bundle-<timestamp>.tar.gz\"",
"oc get nodes",
"oc adm top nodes",
"oc adm top node my-node",
"oc debug node/my-node",
"chroot /host",
"systemctl is-active kubelet",
"systemctl status kubelet",
"oc adm node-logs --role=master -u kubelet 1",
"oc adm node-logs --role=master --path=openshift-apiserver",
"oc adm node-logs --role=master --path=openshift-apiserver/audit.log",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log",
"oc debug node/my-node",
"chroot /host",
"systemctl is-active crio",
"systemctl status crio.service",
"oc adm node-logs --role=master -u crio",
"oc adm node-logs <node_name> -u crio",
"ssh core@<node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service",
"Failed to create pod sandbox: rpc error: code = Unknown desc = failed to mount container XXX: error recreating the missing symlinks: error reading name of symlink for XXX: open /var/lib/containers/storage/overlay/XXX/link: no such file or directory",
"can't stat lower layer ... because it does not exist. Going through storage to recreate the missing symlinks.",
"oc adm cordon <node_name>",
"oc adm drain <node_name> --ignore-daemonsets --delete-emptydir-data",
"ssh [email protected] sudo -i",
"systemctl stop kubelet",
".. for pod in USD(crictl pods -q); do if [[ \"USD(crictl inspectp USDpod | jq -r .status.linux.namespaces.options.network)\" != \"NODE\" ]]; then crictl rmp -f USDpod; fi; done",
"crictl rmp -fa",
"systemctl stop crio",
"crio wipe -f",
"systemctl start crio systemctl start kubelet",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready, SchedulingDisabled master 133m v1.28.5",
"oc adm uncordon <node_name>",
"NAME STATUS ROLES AGE VERSION ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready master 133m v1.28.5",
"rpm-ostree kargs --append='crashkernel=256M'",
"systemctl enable kdump.service",
"systemctl reboot",
"variant: openshift version: 4.15.0 metadata: name: 99-worker-kdump 1 labels: machineconfiguration.openshift.io/role: worker 2 openshift: kernel_arguments: 3 - crashkernel=256M storage: files: - path: /etc/kdump.conf 4 mode: 0644 overwrite: true contents: inline: | path /var/crash core_collector makedumpfile -l --message-level 7 -d 31 - path: /etc/sysconfig/kdump 5 mode: 0644 overwrite: true contents: inline: | KDUMP_COMMANDLINE_REMOVE=\"hugepages hugepagesz slub_debug quiet log_buf_len swiotlb\" KDUMP_COMMANDLINE_APPEND=\"irqpoll nr_cpus=1 reset_devices cgroup_disable=memory mce=off numa=off udev.children-max=2 panic=10 rootflags=nofail acpi_no_memhotplug transparent_hugepage=never nokaslr novmcoredd hest_disable\" 6 KEXEC_ARGS=\"-s\" KDUMP_IMG=\"vmlinuz\" systemd: units: - name: kdump.service enabled: true",
"nfs server.example.com:/export/cores core_collector makedumpfile -l --message-level 7 -d 31 extra_modules nfs",
"butane 99-worker-kdump.bu -o 99-worker-kdump.yaml",
"oc create -f 99-worker-kdump.yaml",
"systemctl --failed",
"journalctl -u <unit>.service",
"NODEIP_HINT=192.0.2.1",
"echo -n 'NODEIP_HINT=192.0.2.1' | base64 -w0",
"Tk9ERUlQX0hJTlQ9MTkyLjAuMCxxxx==",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-nodeip-hint-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<encoded_content> 1 mode: 0644 overwrite: true path: /etc/default/nodeip-configuration",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-nodeip-hint-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<encoded_content> 1 mode: 0644 overwrite: true path: /etc/default/nodeip-configuration",
"[connection] id=eno1 type=ethernet interface-name=eno1 master=bond1 slave-type=bond autoconnect=true autoconnect-priority=20",
"[connection] id=eno2 type=ethernet interface-name=eno2 master=bond1 slave-type=bond autoconnect=true autoconnect-priority=20",
"[connection] id=bond1 type=bond interface-name=bond1 autoconnect=true connection.autoconnect-slaves=1 autoconnect-priority=20 [bond] mode=802.3ad miimon=100 xmit_hash_policy=\"layer3+4\" [ipv4] method=auto",
"base64 <directory_path>/en01.config",
"base64 <directory_path>/eno2.config",
"base64 <directory_path>/bond1.config",
"export ROLE=<machine_role>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: USD{worker} name: 12-USD{ROLE}-sec-bridge-cni spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:;base64,<base-64-encoded-contents-for-bond1.conf> path: /etc/NetworkManager/system-connections/bond1.nmconnection filesystem: root mode: 0600 - contents: source: data:;base64,<base-64-encoded-contents-for-eno1.conf> path: /etc/NetworkManager/system-connections/eno1.nmconnection filesystem: root mode: 0600 - contents: source: data:;base64,<base-64-encoded-contents-for-eno2.conf> path: /etc/NetworkManager/system-connections/eno2.nmconnection filesystem: root mode: 0600",
"oc create -f <machine_config_file_name>",
"bond1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: USD{worker} name: 12-worker-extra-bridge spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/ovnk/extra_bridge mode: 0420 overwrite: true contents: source: data:text/plain;charset=utf-8,bond1 filesystem: root",
"oc create -f <machine_config_file_name>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: USD{worker} name: 12-worker-br-ex-override spec: config: ignition: version: 3.2.0 storage: files: - path: /var/lib/ovnk/iface_default_hint mode: 0420 overwrite: true contents: source: data:text/plain;charset=utf-8,bond0 1 filesystem: root",
"oc create -f <machine_config_file_name>",
"oc get nodes -o json | grep --color exgw-ip-addresses",
"\"k8s.ovn.org/l3-gateway-config\": \\\"exgw-ip-address\\\":\\\"172.xx.xx.yy/24\\\",\\\"next-hops\\\":[\\\"xx.xx.xx.xx\\\"],",
"oc debug node/<node_name> -- chroot /host sh -c \"ip a | grep mtu | grep br-ex\"",
"Starting pod/worker-1-debug To use host binaries, run `chroot /host` 5: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 6: br-ex1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000",
"oc debug node/<node_name> -- chroot /host sh -c \"ip a | grep -A1 -E 'br-ex|bond0'",
"Starting pod/worker-1-debug To use host binaries, run `chroot /host` sh-5.1# ip a | grep -A1 -E 'br-ex|bond0' 2: bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master ovs-system state UP group default qlen 1000 link/ether fa:16:3e:47:99:98 brd ff:ff:ff:ff:ff:ff -- 5: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 link/ether fa:16:3e:47:99:98 brd ff:ff:ff:ff:ff:ff inet 10.xx.xx.xx/21 brd 10.xx.xx.255 scope global dynamic noprefixroute br-ex",
"E0514 12:47:17.998892 2281 daemon.go:1350] content mismatch for file /etc/systemd/system/ovs-vswitchd.service: [Unit]",
"oc debug node/<node_name>",
"chroot /host",
"ovs-appctl vlog/list",
"console syslog file ------- ------ ------ backtrace OFF INFO INFO bfd OFF INFO INFO bond OFF INFO INFO bridge OFF INFO INFO bundle OFF INFO INFO bundles OFF INFO INFO cfm OFF INFO INFO collectors OFF INFO INFO command_line OFF INFO INFO connmgr OFF INFO INFO conntrack OFF INFO INFO conntrack_tp OFF INFO INFO coverage OFF INFO INFO ct_dpif OFF INFO INFO daemon OFF INFO INFO daemon_unix OFF INFO INFO dns_resolve OFF INFO INFO dpdk OFF INFO INFO",
"Restart=always ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /var/lib/openvswitch' ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /etc/openvswitch' ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /run/openvswitch' ExecStartPost=-/usr/bin/ovs-appctl vlog/set syslog:dbg ExecReload=-/usr/bin/ovs-appctl vlog/set syslog:dbg",
"systemctl daemon-reload",
"systemctl restart ovs-vswitchd",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master 1 name: 99-change-ovs-loglevel spec: config: ignition: version: 3.2.0 systemd: units: - dropins: - contents: | [Service] ExecStartPost=-/usr/bin/ovs-appctl vlog/set syslog:dbg 2 ExecReload=-/usr/bin/ovs-appctl vlog/set syslog:dbg name: 20-ovs-vswitchd-restart.conf name: ovs-vswitchd.service",
"oc apply -f 99-change-ovs-loglevel.yaml",
"oc adm node-logs <node_name> -u ovs-vswitchd",
"journalctl -b -f -u ovs-vswitchd.service",
"oc get subs -n <operator_namespace>",
"oc describe sub <subscription_name> -n <operator_namespace>",
"Name: cluster-logging Namespace: openshift-logging Labels: operators.coreos.com/cluster-logging.openshift-logging= Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: Subscription Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy",
"oc get catalogsources -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m",
"oc describe catalogsource example-catalog -n openshift-marketplace",
"Name: example-catalog Namespace: openshift-marketplace Labels: <none> Annotations: operatorframework.io/managed-by: marketplace-operator target.workload.openshift.io/management: {\"effect\": \"PreferredDuringScheduling\"} API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m",
"oc describe pod example-catalog-bwt8z -n openshift-marketplace",
"Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image \"quay.io/example-org/example-catalog:v1\": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull",
"oc get clusteroperators",
"oc get pod -n <operator_namespace>",
"oc describe pod <operator_pod_name> -n <operator_namespace>",
"oc debug node/my-node",
"chroot /host",
"crictl ps",
"crictl ps --name network-operator",
"oc get pods -n <operator_namespace>",
"oc logs pod/<pod_name> -n <operator_namespace>",
"oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <operator_pod_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps --pod=<operator_pod_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool spec: paused: true 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool spec: paused: false 1",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":true}}' machineconfigpool/master",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":true}}' machineconfigpool/worker",
"oc get machineconfigpool/master --template='{{.spec.paused}}'",
"oc get machineconfigpool/worker --template='{{.spec.paused}}'",
"true",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING master rendered-master-33cf0a1254318755d7b48002c597bf91 True False worker rendered-worker-e405a5bdb0db1295acea08bcca33fa60 False False",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":false}}' machineconfigpool/master",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":false}}' machineconfigpool/worker",
"oc get machineconfigpool/master --template='{{.spec.paused}}'",
"oc get machineconfigpool/worker --template='{{.spec.paused}}'",
"false",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True",
"ImagePullBackOff for Back-off pulling image \"example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e\"",
"rpc error: code = Unknown desc = error pinging docker registry example.com: Get \"https://example.com/v2/\": dial tcp: lookup example.com on 10.0.0.1:53: no such host",
"oc get sub,csv -n <namespace>",
"NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded",
"oc delete subscription <subscription_name> -n <namespace>",
"oc delete csv <csv_name> -n <namespace>",
"oc get job,configmap -n openshift-marketplace",
"NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s",
"oc delete job <job_name> -n openshift-marketplace",
"oc delete configmap <configmap_name> -n openshift-marketplace",
"oc get sub,csv,installplan -n <namespace>",
"message: 'Failed to delete all resource types, 1 remaining: Internal error occurred: error resolving resource'",
"oc get namespaces",
"operator-ns-1 Terminating",
"oc get crds",
"oc delete crd <crd_name>",
"oc get EtcdCluster -n <namespace_name>",
"oc get EtcdCluster --all-namespaces",
"oc delete <cr_name> <cr_instance_name> -n <namespace_name>",
"oc get namespace <namespace_name>",
"oc get sub,csv,installplan -n <namespace>",
"oc project <project_name>",
"oc get pods",
"oc status",
"skopeo inspect docker://<image_reference>",
"oc edit deployment/my-deployment",
"oc get pods -w",
"oc get events",
"oc logs <pod_name>",
"oc logs <pod_name> -c <container_name>",
"oc exec <pod_name> -- ls -alh /var/log",
"total 124K drwxr-xr-x. 1 root root 33 Aug 11 11:23 . drwxr-xr-x. 1 root root 28 Sep 6 2022 .. -rw-rw----. 1 root utmp 0 Jul 10 10:31 btmp -rw-r--r--. 1 root root 33K Jul 17 10:07 dnf.librepo.log -rw-r--r--. 1 root root 69K Jul 17 10:07 dnf.log -rw-r--r--. 1 root root 8.8K Jul 17 10:07 dnf.rpm.log -rw-r--r--. 1 root root 480 Jul 17 10:07 hawkey.log -rw-rw-r--. 1 root utmp 0 Jul 10 10:31 lastlog drwx------. 2 root root 23 Aug 11 11:14 openshift-apiserver drwx------. 2 root root 6 Jul 10 10:31 private drwxr-xr-x. 1 root root 22 Mar 9 08:05 rhsm -rw-rw-r--. 1 root utmp 0 Jul 10 10:31 wtmp",
"oc exec <pod_name> cat /var/log/<path_to_log>",
"2023-07-10T10:29:38+0000 INFO --- logging initialized --- 2023-07-10T10:29:38+0000 DDEBUG timer: config: 13 ms 2023-07-10T10:29:38+0000 DEBUG Loaded plugins: builddep, changelog, config-manager, copr, debug, debuginfo-install, download, generate_completion_cache, groups-manager, needs-restarting, playground, product-id, repoclosure, repodiff, repograph, repomanage, reposync, subscription-manager, uploadprofile 2023-07-10T10:29:38+0000 INFO Updating Subscription Management repositories. 2023-07-10T10:29:38+0000 INFO Unable to read consumer identity 2023-07-10T10:29:38+0000 INFO Subscription Manager is operating in container mode. 2023-07-10T10:29:38+0000 INFO",
"oc exec <pod_name> -c <container_name> ls /var/log",
"oc exec <pod_name> -c <container_name> cat /var/log/<path_to_log>",
"oc project <namespace>",
"oc rsh <pod_name> 1",
"oc rsh -c <container_name> pod/<pod_name>",
"oc port-forward <pod_name> <host_port>:<pod_port> 1",
"oc get deployment -n <project_name>",
"oc debug deployment/my-deployment --as-root -n <project_name>",
"oc get deploymentconfigs -n <project_name>",
"oc debug deploymentconfig/my-deployment-configuration --as-root -n <project_name>",
"oc cp <local_path> <pod_name>:/<path> -c <container_name> 1",
"oc cp <pod_name>:/<path> -c <container_name> <local_path> 1",
"oc get pods -w 1",
"oc logs -f pod/<application_name>-<build_number>-build",
"oc logs -f pod/<application_name>-<build_number>-deploy",
"oc logs -f pod/<application_name>-<build_number>-<random_string>",
"oc describe pod/my-app-1-akdlg",
"oc logs -f pod/my-app-1-akdlg",
"oc exec my-app-1-akdlg -- cat /var/log/my-application.log",
"oc debug dc/my-deployment-configuration --as-root -- cat /var/log/my-application.log",
"oc exec -it my-app-1-akdlg /bin/bash",
"oc debug node/my-cluster-node",
"chroot /host",
"crictl ps",
"crictl inspect a7fe32346b120 --output yaml | grep 'pid:' | awk '{print USD2}'",
"nsenter -n -t 31150 -- ip ad",
"Unable to attach or mount volumes: unmounted volumes=[sso-mysql-pvol], unattached volumes=[sso-mysql-pvol default-token-x4rzc]: timed out waiting for the condition Multi-Attach error for volume \"pvc-8837384d-69d7-40b2-b2e6-5df86943eef9\" Volume is already used by pod(s) sso-mysql-1-ns6b4",
"oc delete pod <old_pod> --force=true --grace-period=0",
"oc logs -f deployment/windows-machine-config-operator -n openshift-windows-machine-config-operator",
"ssh -t -o StrictHostKeyChecking=no -o ProxyCommand='ssh -A -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -W %h:%p core@USD(oc get service --all-namespaces -l run=ssh-bastion -o go-template=\"{{ with (index (index .items 0).status.loadBalancer.ingress 0) }}{{ or .hostname .ip }}{{end}}\")' <username>@<windows_node_internal_ip> 1 2",
"oc get nodes <node_name> -o jsonpath={.status.addresses[?\\(@.type==\\\"InternalIP\\\"\\)].address}",
"ssh -L 2020:<windows_node_internal_ip>:3389 \\ 1 core@USD(oc get service --all-namespaces -l run=ssh-bastion -o go-template=\"{{ with (index (index .items 0).status.loadBalancer.ingress 0) }}{{ or .hostname .ip }}{{end}}\")",
"oc get nodes <node_name> -o jsonpath={.status.addresses[?\\(@.type==\\\"InternalIP\\\"\\)].address}",
"C:\\> net user <username> * 1",
"oc adm node-logs -l kubernetes.io/os=windows --path= /ip-10-0-138-252.us-east-2.compute.internal containers /ip-10-0-138-252.us-east-2.compute.internal hybrid-overlay /ip-10-0-138-252.us-east-2.compute.internal kube-proxy /ip-10-0-138-252.us-east-2.compute.internal kubelet /ip-10-0-138-252.us-east-2.compute.internal pods",
"oc adm node-logs -l kubernetes.io/os=windows --path=/kubelet/kubelet.log",
"oc adm node-logs -l kubernetes.io/os=windows --path=journal",
"oc adm node-logs -l kubernetes.io/os=windows --path=journal -u docker",
"C:\\> powershell",
"C:\\> Get-EventLog -LogName Application -Source Docker",
"oc -n ns1 get service prometheus-example-app -o yaml",
"labels: app: prometheus-example-app",
"oc -n ns1 get servicemonitor prometheus-example-monitor -o yaml",
"apiVersion: v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-example-app",
"oc -n openshift-user-workload-monitoring get pods",
"NAME READY STATUS RESTARTS AGE prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m",
"oc -n openshift-user-workload-monitoring logs prometheus-operator-776fcbbd56-2nbfm -c prometheus-operator",
"level=warn ts=2020-08-10T11:48:20.906739623Z caller=operator.go:1829 component=prometheusoperator msg=\"skipping servicemonitor\" error=\"it accesses file system via bearer token file which Prometheus specification prohibits\" servicemonitor=eagle/eagle namespace=openshift-user-workload-monitoring prometheus=user-workload",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheusOperator: logLevel: debug",
"oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"",
"- --log-level=debug",
"oc -n openshift-user-workload-monitoring get pods",
"topk(10, max by(namespace, job) (topk by(namespace, job) (1, scrape_samples_post_metric_relabeling)))",
"topk(10, sum by(namespace, job) (sum_over_time(scrape_series_added[1h])))",
"HOST=USD(oc -n openshift-monitoring get route prometheus-k8s -ojsonpath={.status.ingress[].host})",
"TOKEN=USD(oc whoami -t)",
"curl -H \"Authorization: Bearer USDTOKEN\" -k \"https://USDHOST/api/v1/status/tsdb\"",
"\"status\": \"success\",\"data\":{\"headStats\":{\"numSeries\":507473, \"numLabelPairs\":19832,\"chunkCount\":946298,\"minTime\":1712253600010, \"maxTime\":1712257935346},\"seriesCountByMetricName\": [{\"name\":\"etcd_request_duration_seconds_bucket\",\"value\":51840}, {\"name\":\"apiserver_request_sli_duration_seconds_bucket\",\"value\":47718},",
"oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \\ 1 -c prometheus --image=USD(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \\ 2 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- sh -c 'cd /prometheus/;du -hs USD(ls -dt */ | grep -Eo \"[0-9|A-Z]{26}\")'",
"308M 01HVKMPKQWZYWS8WVDAYQHNMW6 52M 01HVK64DTDA81799TBR9QDECEZ 102M 01HVK64DS7TRZRWF2756KHST5X 140M 01HVJS59K11FBVAPVY57K88Z11 90M 01HVH2A5Z58SKT810EM6B9AT50 152M 01HV8ZDVQMX41MKCN84S32RRZ1 354M 01HV6Q2N26BK63G4RYTST71FBF 156M 01HV664H9J9Z1FTZD73RD1563E 216M 01HTHXB60A7F239HN7S2TENPNS 104M 01HTHMGRXGS0WXA3WATRXHR36B",
"oc debug prometheus-k8s-0 -n openshift-monitoring -c prometheus --image=USD(oc get po -n openshift-monitoring prometheus-k8s-0 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- sh -c 'ls -latr /prometheus/ | egrep -o \"[0-9|A-Z]{26}\" | head -3 | while read BLOCK; do rm -r /prometheus/USDBLOCK; done'",
"oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \\ 1 --image=USD(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \\ 2 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- df -h /prometheus/",
"Starting pod/prometheus-k8s-0-debug-j82w4 Filesystem Size Used Avail Use% Mounted on /dev/nvme0n1p4 40G 15G 40G 37% /prometheus Removing debug pod",
"oc <command> --loglevel <log_level>",
"oc whoami -t",
"sha256~RCV3Qcn7H-OEfqCGVI0CvnZ6"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/support/troubleshooting |
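As a brief companion to the oc log-level guidance above, the sketch below captures the verbose output of a single command for later review. It is illustrative only: the command, the chosen log level, and the file name are examples, and the capture works because, as noted in the table above, the verbose details are written to stderr.

# Capture level-8 output (API requests, headers, and bodies) separately from the normal stdout output.
oc get pods -n openshift-monitoring --loglevel 8 2> oc-debug.log

# Page through the recorded API traffic afterwards.
less oc-debug.log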
Chapter 19. Apache CXF Logging | Chapter 19. Apache CXF Logging Abstract This chapter describes how to configure logging in the Apache CXF runtime. 19.1. Overview of Apache CXF Logging Overview Apache CXF uses the Java logging utility, java.util.logging . Logging is configured in a logging configuration file that is written using the standard java.util.Properties format. To run logging on an application, you can specify logging programmatically or by defining a property at the command that points to the logging configuration file when you start the application. Default properties file Apache CXF comes with a default logging.properties file, which is located in your InstallDir /etc directory. This file configures both the output destination for the log messages and the message level that is published. The default configuration sets the loggers to print message flagged with the WARNING level to the console. You can either use the default file without changing any of the configuration settings or you can change the configuration settings to suit your specific application. Logging feature Apache CXF includes a logging feature that can be plugged into your client or your service to enable logging. Example 19.1, "Configuration for Enabling Logging" shows the configuration to enable the logging feature. Example 19.1. Configuration for Enabling Logging For more information, see Section 19.6, "Logging Message Content" . Where to begin? To run a simple example of logging follow the instructions outlined in a Section 19.2, "Simple Example of Using Logging" . For more information on how logging works in Apache CXF, read this entire chapter. More information on java.util.logging The java.util.logging utility is one of the most widely used Java logging frameworks. There is a lot of information available online that describes how to use and extend this framework. As a starting point, however, the following documents gives a good overview of java.util.logging : http://download.oracle.com/javase/1.5.0/docs/guide/logging/overview.html http://download.oracle.com/javase/1.5.0/docs/api/java/util/logging/package-summary.html 19.2. Simple Example of Using Logging Changing the log levels and output destination To change the log level and output destination of the log messages in the wsdl_first sample application, complete the following steps: Run the sample server as described in the Running the demo using java section of the README.txt file in the InstallDir /samples/wsdl_first directory. Note that the server start command specifies the default logging.properties file, as follows: Platform Command + Windows start java -Djava.util.logging.config.file=%CXF_HOME%\etc\logging.properties demo.hw.server.Server + UNIX java -Djava.util.logging.config.file=USDCXF_HOME/etc/logging.properties demo.hw.server.Server & + The default logging.properties file is located in the InstallDir /etc directory. It configures the Apache CXF loggers to print WARNING level log messages to the console. As a result, you see very little printed to the console. Stop the server as described in the README.txt file. Make a copy of the default logging.properties file, name it mylogging.properties file, and save it in the same directory as the default logging.properties file. 
Change the global logging level and the console logging levels in your mylogging.properties file to INFO by editing the following lines of configuration: Restart the server using the following command: Platform Command + Windows start java -Djava.util.logging.config.file=%CXF_HOME%\etc\mylogging.properties demo.hw.server.Server + UNIX java -Djava.util.logging.config.file=USDCXF_HOME/etc/mylogging.properties demo.hw.server.Server & + Because you configured the global logging and the console logger to log messages of level INFO , you see a lot more log messages printed to the console. 19.3. Default logging configuration file 19.3.1. Overview of Logging Configuration The default logging configuration file, logging.properties , is located in the InstallDir /etc directory. It configures the Apache CXF loggers to print WARNING level messages to the console. If this level of logging is suitable for your application, you do not have to make any changes to the file before using it. You can, however, change the level of detail in the log messages. For example, you can change whether log messages are sent to the console, to a file or to both. In addition, you can specify logging at the level of individual packages. Note This section discusses the configuration properties that appear in the default logging.properties file. There are, however, many other java.util.logging configuration properties that you can set. For more information on the java.util.logging API, see the java.util.logging javadoc at: http://download.oracle.com/javase/1.5/docs/api/java/util/logging/package-summary.html . 19.3.2. Configuring Logging Output Overview The Java logging utility, java.util.logging , uses handler classes to output log messages. Table 19.1, "Java.util.logging Handler Classes" shows the handlers that are configured in the default logging.properties file. Table 19.1. Java.util.logging Handler Classes Handler Class Outputs to ConsoleHandler Outputs log messages to the console FileHandler Outputs log messages to a file Important The handler classes must be on the system classpath in order to be installed by the Java VM when it starts. This is done when you set the Apache CXF environment. Configuring the console handler Example 19.2, "Configuring the Console Handler" shows the code for configuring the console logger. Example 19.2. Configuring the Console Handler The console handler also supports the configuration properties shown in Example 19.3, "Console Handler Properties" . Example 19.3. Console Handler Properties The configuration properties shown in Example 19.3, "Console Handler Properties" can be explained as follows: The console handler supports a separate log level configuration property. This allows you to limit the log messages printed to the console while the global logging setting can be different (see Section 19.3.3, "Configuring Logging Levels" ). The default setting is WARNING . Specifies the java.util.logging formatter class that the console handler class uses to format the log messages. The default setting is the java.util.logging.SimpleFormatter . Configuring the file handler Example 19.4, "Configuring the File Handler" shows code that configures the file handler. Example 19.4. Configuring the File Handler The file handler also supports the configuration properties shown in Example 19.5, "File Handler Configuration Properties" . Example 19.5. 
File Handler Configuration Properties The configuration properties shown in Example 19.5, "File Handler Configuration Properties" can be explained as follows: Specifies the location and pattern of the output file. The default setting is your home directory. Specifies, in bytes, the maximum amount that the logger writes to any one file. The default setting is 50000 . If you set it to zero, there is no limit on the amount that the logger writes to any one file. Specifies how many output files to cycle through. The default setting is 1 . Specifies the java.util.logging formatter class that the file handler class uses to format the log messages. The default setting is the java.util.logging.XMLFormatter . Configuring both the console handler and the file handler You can set the logging utility to output log messages to both the console and to a file by specifying the console handler and the file handler, separated by a comma, as shown in Configuring Both Console Logging and File . Configuring Both Console Logging and File 19.3.3. Configuring Logging Levels Logging levels The java.util.logging framework supports the following levels of logging, from the least verbose to the most verbose: SEVERE WARNING INFO CONFIG FINE FINER FINEST Configuring the global logging level To configure the types of event that are logged across all loggers, configure the global logging level as shown in Example 19.6, "Configuring Global Logging Levels" . Example 19.6. Configuring Global Logging Levels Configuring logging at an individual package The java.util.logging framework supports configuring logging at the level of an individual package. For example, the line of code shown in Example 19.7, "Configuring Logging at the Package Level" configures logging at a SEVERE level on classes in the com.xyz.foo package. Example 19.7. Configuring Logging at the Package Level 19.4. Enabling Logging at the Command Line Overview You can run the logging utility on an application by defining a java.util.logging.config.file property when you start the application. You can either specify the default logging.properties file or a logging.properties file that is unique to that application. Specifying the log configuration file on application To specify logging on application start-up add the flag shown in Example 19.8, "Flag to Start Logging on the Command Line" when starting the application. Example 19.8. Flag to Start Logging on the Command Line 19.5. Logging for Subsystems and Services Overview You can use the com.xyz.foo.level configuration property described in the section called "Configuring logging at an individual package" to set fine-grained logging for specified Apache CXF logging subsystems. Apache CXF logging subsystems Table 19.2, "Apache CXF Logging Subsystems" shows a list of available Apache CXF logging subsystems. Table 19.2. 
Apache CXF Logging Subsystems Subsystem Description org.apache.cxf.aegis Aegis binding org.apache.cxf.binding.coloc colocated binding org.apache.cxf.binding.http HTTP binding org.apache.cxf.binding.jbi JBI binding org.apache.cxf.binding.object Java Object binding org.apache.cxf.binding.soap SOAP binding org.apache.cxf.binding.xml XML binding org.apache.cxf.bus Apache CXF bus org.apache.cxf.configuration configuration framework org.apache.cxf.endpoint server and client endpoints org.apache.cxf.interceptor interceptors org.apache.cxf.jaxws Front-end for JAX-WS style message exchange, JAX-WS handler processing, and interceptors relating to JAX-WS and configuration org.apache.cxf.jbi JBI container integration classes org.apache.cxf.jca JCA container integration classes org.apache.cxf.js JavaScript front-end org.apache.cxf.transport.http HTTP transport org.apache.cxf.transport.https secure version of HTTP transport, using HTTPS org.apache.cxf.transport.jbi JBI transport org.apache.cxf.transport.jms JMS transport org.apache.cxf.transport.local transport implementation using local file system org.apache.cxf.transport.servlet HTTP transport and servlet implementation for loading JAX-WS endpoints into a servlet container org.apache.cxf.ws.addressing WS-Addressing implementation org.apache.cxf.ws.policy WS-Policy implementation org.apache.cxf.ws.rm WS-ReliableMessaging (WS-RM) implementation org.apache.cxf.ws.security.wss4j WSS4J security implementation Example The WS-Addressing sample is contained in the InstallDir /samples/ws_addressing directory. Logging is configured in the logging.properties file located in that directory. The relevant lines of configuration are shown in Example 19.9, "Configuring Logging for WS-Addressing" . Example 19.9. Configuring Logging for WS-Addressing The configuration in Example 19.9, "Configuring Logging for WS-Addressing" enables the snooping of log messages relating to WS-Addressing headers, and displays them to the console in a concise form. For information on running this sample, see the README.txt file located in the InstallDir /samples/ws_addressing directory. 19.6. Logging Message Content Overview You can log the content of the messages that are sent between a service and a consumer. For example, you might want to log the contents of SOAP messages that are being sent between a service and a consumer. Configuring message content logging To log the messages that are sent between a service and a consumer, and vice versa, complete the following steps: Add the logging feature to your endpoint's configuration. Add the logging feature to your consumer's configuration. Configure the logging system log INFO level messages. Adding the logging feature to an endpoint Add the logging feature your endpoint's configuration as shown in Example 19.10, "Adding Logging to Endpoint Configuration" . Example 19.10. Adding Logging to Endpoint Configuration The example XML shown in Example 19.10, "Adding Logging to Endpoint Configuration" enables the logging of SOAP messages. Adding the logging feature to a consumer Add the logging feature your client's configuration as shown in Example 19.11, "Adding Logging to Client Configuration" . Example 19.11. Adding Logging to Client Configuration The example XML shown in Example 19.11, "Adding Logging to Client Configuration" enables the logging of SOAP messages. 
Set logging to log INFO level messages Ensure that the logging.properties file associated with your service is configured to log INFO level messages, as shown in Example 19.12, "Setting the Logging Level to INFO" . Example 19.12. Setting the Logging Level to INFO Logging SOAP messages To see the logging of SOAP messages modify the wsdl_first sample application located in the InstallDir /samples/wsdl_first directory, as follows: Add the jaxws:features element shown in Example 19.13, "Endpoint Configuration for Logging SOAP Messages" to the cxf.xml configuration file located in the wsdl_first sample's directory: Example 19.13. Endpoint Configuration for Logging SOAP Messages The sample uses the default logging.properties file, which is located in the InstallDir /etc directory. Make a copy of this file and name it mylogging.properties . In the mylogging.properties file, change the logging levels to INFO by editing the .level and the java.util.logging.ConsoleHandler.level configuration properties as follows: Start the server using the new configuration settings in both the cxf.xml file and the mylogging.properties file as follows: Platform Command + Windows start java -Djava.util.logging.config.file=%CXF_HOME%\etc\mylogging.properties demo.hw.server.Server + UNIX java -Djava.util.logging.config.file=USDCXF_HOME/etc/mylogging.properties demo.hw.server.Server & + Start the hello world client using the following command: Platform Command + Windows java -Djava.util.logging.config.file=%CXF_HOME%\etc\mylogging.properties demo.hw.client.Client .\wsdl\hello_world.wsdl + UNIX java -Djava.util.logging.config.file=USDCXF_HOME/etc/mylogging.properties demo.hw.client.Client ./wsdl/hello_world.wsdl + The SOAP messages are logged to the console. | [
"<jaxws:endpoint...> <jaxws:features> <bean class=\"org.apache.cxf.feature.LoggingFeature\"/> </jaxws:features> </jaxws:endpoint>",
".level= INFO java.util.logging.ConsoleHandler.level = INFO",
"handlers= java.util.logging.ConsoleHandler",
"java.util.logging.ConsoleHandler.level = WARNING java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter",
"handlers= java.util.logging.FileHandler",
"java.util.logging.FileHandler.pattern = %h/java%u.log java.util.logging.FileHandler.limit = 50000 java.util.logging.FileHandler.count = 1 java.util.logging.FileHandler.formatter = java.util.logging.XMLFormatter",
"Logging",
"handlers= java.util.logging.FileHandler, java.util.logging.ConsoleHandler",
".level= WARNING",
"level",
"com.xyz.foo.level = SEVERE",
"start-up",
"-Djava.util.logging.config.file= myfile",
"java.util.logging.ConsoleHandler.formatter = demos.ws_addressing.common.ConciseFormatter org.apache.cxf.ws.addressing.soap.MAPCodec.level = INFO",
"<jaxws:endpoint ...> <jaxws:features> <bean class=\"org.apache.cxf.feature.LoggingFeature\"/> </jaxws:features> </jaxws:endpoint>",
"<jaxws:client ...> <jaxws:features> <bean class=\"org.apache.cxf.feature.LoggingFeature\"/> </jaxws:features> </jaxws:client>",
".level= INFO java.util.logging.ConsoleHandler.level = INFO",
"<jaxws:endpoint name=\"{http://apache.org/hello_world_soap_http}SoapPort\" createdFromAPI=\"true\"> <jaxws:properties> <entry key=\"schema-validation-enabled\" value=\"true\" /> </jaxws:properties> <jaxws:features> <bean class=\"org.apache.cxf.feature.LoggingFeature\"/> </jaxws:features> </jaxws:endpoint>",
".level= INFO java.util.logging.ConsoleHandler.level = INFO"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/CXFDeployLogging |
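Tying the Apache CXF logging chapter together, the sketch below writes a small custom java.util.logging configuration and starts the sample client with it. This is only an illustration: the console handler properties, the org.apache.cxf.transport.http subsystem name, and the client command are taken from the examples above, while the FINE level and the file location are assumptions; it also presumes the sample environment (CXF_HOME and the demo classpath) is already set up as described in the sample README.

# Write a custom configuration: console output at INFO, extra detail for the HTTP transport subsystem.
cat > $CXF_HOME/etc/mylogging.properties <<'EOF'
handlers= java.util.logging.ConsoleHandler
.level= INFO
java.util.logging.ConsoleHandler.level = INFO
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
org.apache.cxf.transport.http.level = FINE
EOF

# Start the sample client with the custom configuration (UNIX-style command line).
java -Djava.util.logging.config.file=$CXF_HOME/etc/mylogging.properties \
  demo.hw.client.Client ./wsdl/hello_world.wsdl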
Chapter 13. Managing machines with the Cluster API | Chapter 13. Managing machines with the Cluster API 13.1. About the Cluster API Important Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The Cluster API is an upstream project that is integrated into OpenShift Container Platform as a Technology Preview for Amazon Web Services (AWS), Google Cloud Platform (GCP), and Red Hat OpenStack Platform (RHOSP). 13.1.1. Cluster API overview You can use the Cluster API to create and manage compute machine sets and compute machines in your OpenShift Container Platform cluster. This capability is in addition or an alternative to managing machines with the Machine API. For OpenShift Container Platform 4.15 clusters, you can use the Cluster API to perform node host provisioning management actions after the cluster installation finishes. This system enables an elastic, dynamic provisioning method on top of public or private cloud infrastructure. With the Cluster API Technology Preview, you can create compute machines and compute machine sets on OpenShift Container Platform clusters for supported providers. You can also explore the features that are enabled by this implementation that might not be available with the Machine API. 13.1.1.1. Cluster API benefits By using the Cluster API, OpenShift Container Platform users and developers gain the following advantages: The option to use upstream community Cluster API infrastructure providers that might not be supported by the Machine API. The opportunity to collaborate with third parties who maintain machine controllers for infrastructure providers. The ability to use the same set of Kubernetes tools for infrastructure management in OpenShift Container Platform. The ability to create compute machine sets by using the Cluster API that support features that are not available with the Machine API. 13.1.1.2. Cluster API limitations Using the Cluster API to manage machines is a Technology Preview feature and has the following limitations: To use this feature, you must enable the TechPreviewNoUpgrade feature set. Important Enabling this feature set cannot be undone and prevents minor version updates. Only AWS, Google Cloud Platform (GCP), and Red Hat OpenStack Platform (RHOSP) clusters can use the Cluster API. You must manually create the primary resources that the Cluster API requires. For more information, see "Getting started with the Cluster API". You cannot use the Cluster API to manage control plane machines. Migration of existing compute machine sets created by the Machine API to Cluster API compute machine sets is not supported. Full feature parity with the Machine API is not available. For clusters that use the Cluster API, OpenShift CLI ( oc ) commands prioritize Cluster API objects over Machine API objects. This behavior impacts any oc command that acts upon any object that is represented in both the Cluster API and the Machine API. 
For more information and a workaround for this issue, see "Referencing the intended objects when using the CLI" in the troubleshooting content. Additional resources Enabling features using feature gates Getting started with the Cluster API Referencing the intended objects when using the CLI 13.1.2. Cluster API architecture The OpenShift Container Platform integration of the upstream Cluster API is implemented and managed by the Cluster CAPI Operator. The Cluster CAPI Operator and its operands are provisioned in the openshift-cluster-api namespace, in contrast to the Machine API, which uses the openshift-machine-api namespace. 13.1.2.1. The Cluster CAPI Operator The Cluster CAPI Operator is an OpenShift Container Platform Operator that maintains the lifecycle of Cluster API resources. This Operator is responsible for all administrative tasks related to deploying the Cluster API project within an OpenShift Container Platform cluster. If a cluster is configured correctly to allow the use of the Cluster API, the Cluster CAPI Operator installs the Cluster API components on the cluster. For more information, see the "Cluster CAPI Operator" entry in the Cluster Operators reference content. Additional resources Cluster CAPI Operator 13.1.2.2. Cluster API primary resources The Cluster API consists of the following primary resources. For the Technology Preview of this feature, you must create these resources manually in the openshift-cluster-api namespace. Cluster A fundamental unit that represents a cluster that is managed by the Cluster API. Infrastructure cluster A provider-specific resource that defines properties that are shared by all the compute machine sets in the cluster, such as the region and subnets. Machine template A provider-specific template that defines the properties of the machines that a compute machine set creates. Machine set A group of machines. Compute machine sets are to machines as replica sets are to pods. To add machines or scale them down, change the replicas field on the compute machine set custom resource to meet your compute needs. With the Cluster API, a compute machine set references a Cluster object and a provider-specific machine template. Machine A fundamental unit that describes the host for a node. The Cluster API creates machines based on the configuration in the machine template. 13.2. Getting started with the Cluster API Important Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . For the Cluster API Technology Preview, you must create the primary resources that the Cluster API requires manually. 13.2.1. Creating the Cluster API primary resources To create the Cluster API primary resources, you must obtain the cluster ID value, which you use for the <cluster_name> parameter in the cluster resource manifest. 13.2.1.1. Obtaining the cluster ID value You can find the cluster ID value by using the OpenShift CLI ( oc ). Prerequisites You have deployed an OpenShift Container Platform cluster. 
You have access to the cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). Procedure Obtain the value of the cluster ID by running the following command: USD oc get infrastructure cluster \ -o jsonpath='{.status.infrastructureName}' You can create the Cluster API primary resources manually by creating YAML manifest files and applying them with the OpenShift CLI ( oc ). 13.2.1.2. Creating the Cluster API cluster resource You can create the cluster resource by creating a YAML manifest file and applying it with the OpenShift CLI ( oc ). Prerequisites You have deployed an OpenShift Container Platform cluster. You have enabled the use of the Cluster API. You have access to the cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have the cluster ID value. Procedure Create a YAML file similar to the following. This procedure uses <cluster_resource_file>.yaml as an example file name. apiVersion: cluster.x-k8s.io/v1beta1 kind: Cluster metadata: name: <cluster_name> 1 namespace: openshift-cluster-api spec: infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: <infrastructure_kind> 2 name: <cluster_name> namespace: openshift-cluster-api 1 Specify the cluster ID as the name of the cluster. 2 Specify the infrastructure kind for the cluster. The following values are valid: AWSCluster : The cluster is running on Amazon Web Services (AWS). GCPCluster : The cluster is running on Google Cloud Platform (GCP). Cluster cloud provider Value Amazon Web Services (AWS) AWSCluster Google Cloud Platform (GCP) GCPCluster Red Hat OpenStack Platform (RHOSP) OpenStackCluster Create the cluster CR by running the following command: USD oc create -f <cluster_resource_file>.yaml Verification Confirm that the cluster CR exists by running the following command: USD oc get cluster Example output NAME PHASE AGE VERSION <cluster_name> Provisioning 4h6m The cluster resource is ready when the value of PHASE is Provisioned . Additional resources Cluster API configuration 13.2.1.3. Creating a Cluster API infrastructure resource You can create a provider-specific infrastructure resource by creating a YAML manifest file and applying it with the OpenShift CLI ( oc ). Prerequisites You have deployed an OpenShift Container Platform cluster. You have enabled the use of the Cluster API. You have access to the cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have the cluster ID value. You have created and applied the cluster resource. Procedure Create a YAML file similar to the following. This procedure uses <infrastructure_resource_file>.yaml as an example file name. apiVersion: infrastructure.cluster.x-k8s.io/<version> 1 kind: <infrastructure_kind> 2 metadata: name: <cluster_name> 3 namespace: openshift-cluster-api spec: 4 1 The apiVersion varies by platform. For more information, see the sample Cluster API infrastructure resource YAML for your provider. The following values are valid: infrastructure.cluster.x-k8s.io/v1beta1 : The version that Amazon Web Services (AWS), Google Cloud Platform (GCP), and Red Hat OpenStack Platform (RHOSP) clusters use. 2 Specify the infrastructure kind for the cluster. This value must match the value for your platform. 
The following values are valid: Cluster cloud provider Value Amazon Web Services (AWS) AWSCluster Google Cloud Platform (GCP) GCPCluster Red Hat OpenStack Platform (RHOSP) OpenStackCluster 3 Specify the name of the cluster. 4 Specify the details for your environment. These parameters are provider specific. For more information, see the sample Cluster API infrastructure resource YAML for your provider. Create the infrastructure CR by running the following command: USD oc create -f <infrastructure_resource_file>.yaml Verification Confirm that the infrastructure CR is created by running the following command: USD oc get <infrastructure_kind> where <infrastructure_kind> is the value that corresponds to your platform. Example output NAME CLUSTER READY <cluster_name> <cluster_name> true Note This output might contain additional columns that are specific to your cloud provider. Additional resources Sample YAML for a Cluster API infrastructure resource on Amazon Web Services Sample YAML for a Cluster API infrastructure resource on Google Cloud Platform Sample YAML for a Cluster API infrastructure resource on RHOSP 13.2.1.4. Creating a Cluster API machine template You can create a provider-specific machine template resource by creating a YAML manifest file and applying it with the OpenShift CLI ( oc ). Prerequisites You have deployed an OpenShift Container Platform cluster. You have enabled the use of the Cluster API. You have access to the cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have created and applied the cluster and infrastructure resources. Procedure Create a YAML file similar to the following. This procedure uses <machine_template_resource_file>.yaml as an example file name. apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: <machine_template_kind> 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 1 Specify the machine template kind. This value must match the value for your platform. The following values are valid: Cluster cloud provider Value Amazon Web Services (AWS) AWSMachineTemplate Google Cloud Platform (GCP) GCPMachineTemplate Red Hat OpenStack Platform (RHOSP) OpenStackMachineTemplate 2 Specify a name for the machine template. 3 Specify the details for your environment. These parameters are provider specific. For more information, see the sample Cluster API machine template YAML for your provider. Create the machine template CR by running the following command: USD oc create -f <machine_template_resource_file>.yaml Verification Confirm that the machine template CR is created by running the following command: USD oc get <machine_template_kind> where <machine_template_kind> is the value that corresponds to your platform. Example output NAME AGE <template_name> 77m Additional resources Sample YAML for a Cluster API machine template resource on Amazon Web Services Sample YAML for a Cluster API machine template resource on Google Cloud Platform Sample YAML for a Cluster API machine template resource on RHOSP 13.2.1.5. Creating a Cluster API compute machine set You can create compute machine sets that use the Cluster API to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites You have deployed an OpenShift Container Platform cluster. You have enabled the use of the Cluster API. You have access to the cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). 
You have created the cluster, infrastructure, and machine template resources. Procedure Create a YAML file similar to the following. This procedure uses <machine_set_resource_file>.yaml as an example file name. apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: example template: metadata: labels: test: example spec: 3 # ... 1 Specify a name for the compute machine set. 2 Specify the name of the cluster. 3 Specify the details for your environment. These parameters are provider specific. For more information, see the sample Cluster API compute machine set YAML for your provider. Create the compute machine set CR by running the following command: USD oc create -f <machine_set_resource_file>.yaml Confirm that the compute machine set CR is created by running the following command: USD oc get machineset -n openshift-cluster-api 1 1 Specify the openshift-cluster-api namespace. Example output NAME CLUSTER REPLICAS READY AVAILABLE AGE VERSION <machine_set_name> <cluster_name> 1 1 1 17m When the new compute machine set is available, the REPLICAS and AVAILABLE values match. If the compute machine set is not available, wait a few minutes and run the command again. Verification To verify that the compute machine set is creating machines according to your required configuration, review the lists of machines and nodes in the cluster by running the following commands: View the list of Cluster API machines: USD oc get machine -n openshift-cluster-api 1 1 Specify the openshift-cluster-api namespace. Example output NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_set_name>-<string_id> <cluster_name> <ip_address>.<region>.compute.internal <provider_id> Running 8m23s View the list of nodes: USD oc get node Example output NAME STATUS ROLES AGE VERSION <ip_address_1>.<region>.compute.internal Ready worker 5h14m v1.28.5 <ip_address_2>.<region>.compute.internal Ready master 5h19m v1.28.5 <ip_address_3>.<region>.compute.internal Ready worker 7m v1.28.5 Additional resources Sample YAML for a Cluster API compute machine set resource on Amazon Web Services Sample YAML for a Cluster API compute machine set resource on Google Cloud Platform Sample YAML for a Cluster API compute machine set resource on RHOSP 13.3. Managing machines with the Cluster API Important Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 13.3.1. Modifying a Cluster API machine template You can update the machine template resource for your cluster by modifying the YAML manifest file and applying it with the OpenShift CLI ( oc ). Prerequisites You have deployed an OpenShift Container Platform cluster that uses the Cluster API. You have access to the cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). 
Procedure List the machine template resource for your cluster by running the following command: USD oc get <machine_template_kind> 1 1 Specify the value that corresponds to your platform. The following values are valid: Cluster cloud provider Value Amazon Web Services (AWS) AWSMachineTemplate Google Cloud Platform (GCP) GCPMachineTemplate Red Hat OpenStack Platform (RHOSP) OpenStackMachineTemplate Example output NAME AGE <template_name> 77m Write the machine template resource for your cluster to a file that you can edit by running the following command: USD oc get <machine_template_kind> <template_name> -o yaml > <template_name>.yaml where <template_name> is the name of the machine template resource for your cluster. Make a copy of the <template_name>.yaml file with a different name. This procedure uses <modified_template_name>.yaml as an example file name. Use a text editor to make changes to the <modified_template_name>.yaml file that defines the updated machine template resource for your cluster. When editing the machine template resource, observe the following: The parameters in the spec stanza are provider specific. For more information, see the sample Cluster API machine template YAML for your provider. You must use a value for the metadata.name parameter that differs from any existing values. Important For any Cluster API compute machine sets that reference this template, you must update the spec.template.spec.infrastructureRef.name parameter to match the metadata.name value in the new machine template resource. Apply the machine template CR by running the following command: USD oc apply -f <modified_template_name>.yaml 1 1 Use the edited YAML file with a new name. steps For any Cluster API compute machine sets that reference this template, update the spec.template.spec.infrastructureRef.name parameter to match the metadata.name value in the new machine template resource. For more information, see "Modifying a compute machine set by using the CLI." Additional resources Sample YAML for a Cluster API machine template resource on Amazon Web Services Sample YAML for a Cluster API machine template resource on Google Cloud Platform Sample YAML for a Cluster API machine template resource on RHOSP Modifying a compute machine set by using the CLI 13.3.2. Modifying a compute machine set by using the CLI You can modify the configuration of a compute machine set, and then propagate the changes to the machines in your cluster by using the CLI. By updating the compute machine set configuration, you can enable features or change the properties of the machines it creates. When you modify a compute machine set, your changes only apply to compute machines that are created after you save the updated MachineSet custom resource (CR). The changes do not affect existing machines. Note Changes made in the underlying cloud provider are not reflected in the Machine or MachineSet CRs. To adjust instance configuration in cluster-managed infrastructure, use the cluster-side resources. You can replace the existing machines with new ones that reflect the updated configuration by scaling the compute machine set to create twice the number of replicas and then scaling it down to the original number of replicas. If you need to scale a compute machine set without making other changes, you do not need to delete the machines. Note By default, the OpenShift Container Platform router pods are deployed on compute machines. 
Because the router is required to access some cluster resources, including the web console, do not scale the compute machine set to 0 unless you first relocate the router pods. The output examples in this procedure use the values for an AWS cluster. Prerequisites Your OpenShift Container Platform cluster uses the Cluster API. You are logged in to the cluster as an administrator by using the OpenShift CLI ( oc ). Procedure List the compute machine sets in your cluster by running the following command: USD oc get machinesets.cluster.x-k8s.io -n openshift-cluster-api Example output NAME CLUSTER REPLICAS READY AVAILABLE AGE VERSION <compute_machine_set_name_1> <cluster_name> 1 1 1 26m <compute_machine_set_name_2> <cluster_name> 1 1 1 26m Edit a compute machine set by running the following command: USD oc edit machinesets.cluster.x-k8s.io <machine_set_name> \ -n openshift-cluster-api Note the value of the spec.replicas field, because you need it when scaling the machine set to apply the changes. apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> namespace: openshift-cluster-api spec: replicas: 2 1 # ... 1 The examples in this procedure show a compute machine set that has a replicas value of 2 . Update the compute machine set CR with the configuration options that you want and save your changes. List the machines that are managed by the updated compute machine set by running the following command: USD oc get machines.cluster.x-k8s.io \ -n openshift-cluster-api \ -l cluster.x-k8s.io/set-name=<machine_set_name> Example output for an AWS cluster NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_original_1> <cluster_name> <original_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h <machine_name_original_2> <cluster_name> <original_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h For each machine that is managed by the updated compute machine set, set the delete annotation by running the following command: USD oc annotate machines.cluster.x-k8s.io/<machine_name_original_1> \ -n openshift-cluster-api \ cluster.x-k8s.io/delete-machine="true" To create replacement machines with the new configuration, scale the compute machine set to twice the number of replicas by running the following command: USD oc scale --replicas=4 \ 1 machinesets.cluster.x-k8s.io <machine_set_name> \ -n openshift-cluster-api 1 The original example value of 2 is doubled to 4 . List the machines that are managed by the updated compute machine set by running the following command: USD oc get machines.cluster.x-k8s.io \ -n openshift-cluster-api \ -l cluster.x-k8s.io/set-name=<machine_set_name> Example output for an AWS cluster NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_original_1> <cluster_name> <original_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h <machine_name_original_2> <cluster_name> <original_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h <machine_name_updated_1> <cluster_name> <updated_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Provisioned 55s <machine_name_updated_2> <cluster_name> <updated_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Provisioning 55s When the new machines are in the Running phase, you can scale the compute machine set to the original number of replicas. 
To remove the machines that were created with the old configuration, scale the compute machine set to the original number of replicas by running the following command: USD oc scale --replicas=2 \ 1 machinesets.cluster.x-k8s.io <machine_set_name> \ -n openshift-cluster-api 1 The original example value of 2 . Verification To verify that a machine created by the updated machine set has the correct configuration, examine the relevant fields in the CR for one of the new machines by running the following command: USD oc describe machines.cluster.x-k8s.io <machine_name_updated_1> \ -n openshift-cluster-api To verify that the compute machines without the updated configuration are deleted, list the machines that are managed by the updated compute machine set by running the following command: USD oc get machines.cluster.x-k8s.io \ -n openshift-cluster-api \ -l cluster.x-k8s.io/set-name=<machine_set_name> Example output while deletion is in progress for an AWS cluster NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_original_1> <cluster_name> <original_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_original_2> <cluster_name> <original_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_updated_1> <cluster_name> <updated_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_updated_2> <cluster_name> <updated_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m Example output when deletion is complete for an AWS cluster NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_updated_1> <cluster_name> <updated_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_updated_2> <cluster_name> <updated_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m Additional resources Sample YAML for a Cluster API compute machine set resource on Amazon Web Services Sample YAML for a Cluster API compute machine set resource on Google Cloud Platform Sample YAML for a Cluster API compute machine set resource on RHOSP 13.4. Cluster API configuration Important Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The following example YAML files show how to make the Cluster API primary resources work together and configure settings for the machines that they create that are appropriate for your environment. 13.4.1. Sample YAML for a Cluster API cluster resource The cluster resource defines the name and infrastructure provider for the cluster and is managed by the Cluster API. This resource has the same structure for all providers. apiVersion: cluster.x-k8s.io/v1beta1 kind: Cluster metadata: name: <cluster_name> 1 namespace: openshift-cluster-api spec: infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: <infrastructure_kind> 2 name: <cluster_name> namespace: openshift-cluster-api 1 Specify the name of the cluster.
2 Specify the infrastructure kind for the cluster. The following values are valid: Cluster cloud provider Value Amazon Web Services AWSCluster GCP GCPCluster RHOSP OpenStackCluster 13.4.2. Provider-specific configuration options The remaining Cluster API resources are provider-specific. For provider-specific configuration options for your cluster, see the following resources: Cluster API configuration options for Amazon Web Services Cluster API configuration options for Google Cloud Platform Cluster API configuration options for RHOSP 13.5. Configuration options for Cluster API machines 13.5.1. Cluster API configuration options for Amazon Web Services Important Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can change the configuration of your Amazon Web Services (AWS) Cluster API machines by updating values in the Cluster API custom resource manifests. 13.5.1.1. Sample YAML for configuring Amazon Web Services clusters The following example YAML files show configurations for an Amazon Web Services cluster. 13.5.1.1.1. Sample YAML for a Cluster API infrastructure resource on Amazon Web Services The infrastructure resource is provider-specific and defines properties that are shared by all the compute machine sets in the cluster, such as the region and subnets. The compute machine set references this resource when creating machines. apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: AWSCluster 1 metadata: name: <cluster_name> 2 namespace: openshift-cluster-api spec: controlPlaneEndpoint: 3 host: <control_plane_endpoint_address> port: 6443 region: <region> 4 1 Specify the infrastructure kind for the cluster. This value must match the value for your platform. 2 Specify the cluster ID as the name of the cluster. 3 Specify the address of the control plane endpoint and the port to use to access it. 4 Specify the AWS region. 13.5.1.1.2. Sample YAML for a Cluster API machine template resource on Amazon Web Services The machine template resource is provider-specific and defines the basic properties of the machines that a compute machine set creates. The compute machine set references this template when creating machines. apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: AWSMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 uncompressedUserData: true iamInstanceProfile: # ... instanceType: m5.large cloudInit: insecureSkipSecretsManager: true ami: id: # ... subnet: filters: - name: tag:Name values: - # ... additionalSecurityGroups: - filters: - name: tag:Name values: - # ... 1 Specify the machine template kind. This value must match the value for your platform. 2 Specify a name for the machine template. 3 Specify the details for your environment. The values here are examples. 13.5.1.1.3. Sample YAML for a Cluster API compute machine set resource on Amazon Web Services The compute machine set resource defines additional properties of the machines that it creates. 
The compute machine set also references the infrastructure resource and machine template when creating machines. apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: example template: metadata: labels: test: example spec: bootstrap: dataSecretName: worker-user-data 3 clusterName: <cluster_name> infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: AWSMachineTemplate 4 name: <template_name> 5 1 Specify a name for the compute machine set. 2 Specify the cluster ID as the name of the cluster. 3 For the Cluster API Technology Preview, the Operator can use the worker user data secret from the openshift-machine-api namespace. 4 Specify the machine template kind. This value must match the value for your platform. 5 Specify the machine template name. 13.5.2. Cluster API configuration options for Google Cloud Platform Important Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can change the configuration of your Google Cloud Platform (GCP) Cluster API machines by updating values in the Cluster API custom resource manifests. 13.5.2.1. Sample YAML for configuring Google Cloud Platform clusters The following example YAML files show configurations for a Google Cloud Platform cluster. 13.5.2.1.1. Sample YAML for a Cluster API infrastructure resource on Google Cloud Platform The infrastructure resource is provider-specific and defines properties that are shared by all the compute machine sets in the cluster, such as the region and subnets. The compute machine set references this resource when creating machines. apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: GCPCluster 1 metadata: name: <cluster_name> 2 spec: controlPlaneEndpoint: 3 host: <control_plane_endpoint_address> port: 6443 network: name: <cluster_name>-network project: <project> 4 region: <region> 5 1 Specify the infrastructure kind for the cluster. This value must match the value for your platform. 2 Specify the cluster ID as the name of the cluster. 3 Specify the IP address of the control plane endpoint and the port used to access it. 4 Specify the GCP project name. 5 Specify the GCP region. 13.5.2.1.2. Sample YAML for a Cluster API machine template resource on Google Cloud Platform The machine template resource is provider-specific and defines the basic properties of the machines that a compute machine set creates. The compute machine set references this template when creating machines. 
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: GCPMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 rootDeviceType: pd-ssd rootDeviceSize: 128 instanceType: n1-standard-4 image: projects/rhcos-cloud/global/images/rhcos-411-85-202203181601-0-gcp-x86-64 subnet: <cluster_name>-worker-subnet serviceAccounts: email: <service_account_email_address> scopes: - https://www.googleapis.com/auth/cloud-platform additionalLabels: kubernetes-io-cluster-<cluster_name>: owned additionalNetworkTags: - <cluster_name>-worker ipForwarding: Disabled 1 Specify the machine template kind. This value must match the value for your platform. 2 Specify a name for the machine template. 3 Specify the details for your environment. The values here are examples. 13.5.2.1.3. Sample YAML for a Cluster API compute machine set resource on Google Cloud Platform The compute machine set resource defines additional properties of the machines that it creates. The compute machine set also references the infrastructure resource and machine template when creating machines. apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: example template: metadata: labels: test: example spec: bootstrap: dataSecretName: worker-user-data 3 clusterName: <cluster_name> infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: GCPMachineTemplate 4 name: <template_name> 5 failureDomain: <failure_domain> 6 1 Specify a name for the compute machine set. 2 Specify the cluster ID as the name of the cluster. 3 For the Cluster API Technology Preview, the Operator can use the worker user data secret from the openshift-machine-api namespace. 4 Specify the machine template kind. This value must match the value for your platform. 5 Specify the machine template name. 6 Specify the failure domain within the GCP region. 13.5.3. Cluster API configuration options for Red Hat OpenStack Platform Important Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can change the configuration of your Red Hat OpenStack Platform (RHOSP) Cluster API machines by updating values in the Cluster API custom resource manifests. 13.5.3.1. Sample YAML for configuring RHOSP clusters The following example YAML files show configurations for a RHOSP cluster. 13.5.3.1.1. Sample YAML for a Cluster API infrastructure cluster resource on RHOSP The infrastructure cluster resource is provider-specific and defines properties that are shared by all the compute machine sets in the cluster, such as the region and subnets. The compute machine set references this resource when creating machines. Important Only the parameters in the following example are validated to be compatible with OpenShift Container Platform. Other parameters, such as those documented in the upstream Cluster API Provider OpenStack book, might cause undesired behavior. 
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: OpenStackCluster 1 metadata: name: <cluster_name> 2 namespace: openshift-cluster-api labels: cluster.x-k8s.io/cluster-name: <cluster_name> spec: controlPlaneEndpoint: <control_plane_endpoint_address> 3 disableAPIServerFloatingIP: true tags: - openshiftClusterID=<cluster_name> network: id: <api_service_network_id> 4 externalNetwork: id: <floating_network_id> 5 identityRef: cloudName: openstack name: openstack-cloud-credentials 1 Specify the infrastructure kind for the cluster. This value must match the value for your platform. 2 Specify the cluster ID as the name of the cluster. 3 Specify the IP address of the control plane endpoint and the port used to access it. 4 Specify the UUID of the default network to use for machines that do not specify ports. Note This feature might be removed in a future release. To prevent issues due to the removal of this feature, specify at least one port in the machine specification instead of relying solely on this feature. 5 Specify the UUID of an external network. You can specify the network used to assign external floating IP addresses or the network used for egress for this field. Note The infrastructure cluster resource requires this field but does not currently use this value. This requirement is planned to be removed in a future release. 13.5.3.1.2. Sample YAML for a Cluster API machine template resource on RHOSP The machine template resource is provider-specific and defines the basic properties of the machines that a compute machine set creates. The compute machine set references this template when creating machines. apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: OpenStackMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 flavor: <openstack_node_machine_flavor> 4 image: filter: name: <openstack_image> 5 1 Specify the machine template kind. This value must match the value for your platform. 2 Specify a name for the machine template. 3 Specify the details for your environment. The values here are examples. 4 Specify the RHOSP flavor to use. For more information, see Creating flavors for launching instances . 5 Specify the image to use. 13.5.3.1.3. Sample YAML for a Cluster API compute machine set resource on RHOSP The compute machine set resource defines additional properties of the machines that it creates. The compute machine set also references the infrastructure resource and machine template when creating machines. apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> template: metadata: labels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> node-role.kubernetes.io/<role>: "" spec: bootstrap: dataSecretName: worker-user-data 3 clusterName: <cluster_name> infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: OpenStackMachineTemplate 4 name: <template_name> 5 failureDomain: <nova_availability_zone> 6 1 Specify a name for the compute machine set. 2 Specify the cluster ID as the name of the cluster. 3 For the Cluster API Technology Preview, the Operator can use the worker user data secret from the openshift-machine-api namespace. 4 Specify the machine template kind. 
This value must match the value for your platform. 5 Specify the machine template name. 6 Optional: Specify the name of the Nova availability zone for the machine set to create machines in. If you do not specify a value, machines are not restricted to a specific availability zone. 13.6. Troubleshooting clusters that use the Cluster API Important Managing machines with the Cluster API is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Use the information in this section to understand and recover from issues you might encounter. Generally, troubleshooting steps for problems with the Cluster API are similar to those steps for problems with the Machine API. The Cluster CAPI Operator and its operands are provisioned in the openshift-cluster-api namespace, whereas the Machine API uses the openshift-machine-api namespace. When using oc commands that reference a namespace, be sure to reference the correct one. 13.6.1. Referencing the intended objects when using the CLI For clusters that use the Cluster API, OpenShift CLI ( oc ) commands prioritize Cluster API objects over Machine API objects. This behavior impacts any oc command that acts upon any object that is represented in both the Cluster API and the Machine API. This explanation uses the oc delete machine command, which deletes a machine, as an example. Cause When you run an oc command, oc communicates with the Kube API server to determine which objects to act upon. The Kube API server uses the first installed custom resource definition (CRD) it encounters alphabetically when an oc command is run. CRDs for Cluster API objects are in the cluster.x-k8s.io group, while CRDs for Machine API objects are in the machine.openshift.io group. Because the letter c precedes the letter m alphabetically, the Kube API server matches on the Cluster API object CRD. As a result, the oc command acts upon Cluster API objects. Consequences Due to this behavior, the following unintended outcomes can occur on a cluster that uses the Cluster API: For namespaces that contain both types of objects, commands such as oc get machine return only Cluster API objects. For namespaces that contain only Machine API objects, commands such as oc get machine return no results. Workaround You can ensure that oc commands act on the type of objects you intend by using the corresponding fully qualified name. Prerequisites You have access to the cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). Procedure To delete a Machine API machine, use the fully qualified name machine.machine.openshift.io when running the oc delete machine command: USD oc delete machine.machine.openshift.io <machine_name> To delete a Cluster API machine, use the fully qualified name machine.cluster.x-k8s.io when running the oc delete machine command: USD oc delete machine.cluster.x-k8s.io <machine_name> | [
"oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}'",
"apiVersion: cluster.x-k8s.io/v1beta1 kind: Cluster metadata: name: <cluster_name> 1 namespace: openshift-cluster-api spec: infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: <infrastructure_kind> 2 name: <cluster_name> namespace: openshift-cluster-api",
"oc create -f <cluster_resource_file>.yaml",
"oc get cluster",
"NAME PHASE AGE VERSION <cluster_name> Provisioning 4h6m",
"apiVersion: infrastructure.cluster.x-k8s.io/<version> 1 kind: <infrastructure_kind> 2 metadata: name: <cluster_name> 3 namespace: openshift-cluster-api spec: 4",
"oc create -f <infrastructure_resource_file>.yaml",
"oc get <infrastructure_kind>",
"NAME CLUSTER READY <cluster_name> <cluster_name> true",
"apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: <machine_template_kind> 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3",
"oc create -f <machine_template_resource_file>.yaml",
"oc get <machine_template_kind>",
"NAME AGE <template_name> 77m",
"apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: example template: metadata: labels: test: example spec: 3",
"oc create -f <machine_set_resource_file>.yaml",
"oc get machineset -n openshift-cluster-api 1",
"NAME CLUSTER REPLICAS READY AVAILABLE AGE VERSION <machine_set_name> <cluster_name> 1 1 1 17m",
"oc get machine -n openshift-cluster-api 1",
"NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_set_name>-<string_id> <cluster_name> <ip_address>.<region>.compute.internal <provider_id> Running 8m23s",
"oc get node",
"NAME STATUS ROLES AGE VERSION <ip_address_1>.<region>.compute.internal Ready worker 5h14m v1.28.5 <ip_address_2>.<region>.compute.internal Ready master 5h19m v1.28.5 <ip_address_3>.<region>.compute.internal Ready worker 7m v1.28.5",
"oc get <machine_template_kind> 1",
"NAME AGE <template_name> 77m",
"oc get <machine_template_kind> <template_name> -o yaml > <template_name>.yaml",
"oc apply -f <modified_template_name>.yaml 1",
"oc get machinesets.cluster.x-k8s.io -n openshift-cluster-api",
"NAME CLUSTER REPLICAS READY AVAILABLE AGE VERSION <compute_machine_set_name_1> <cluster_name> 1 1 1 26m <compute_machine_set_name_2> <cluster_name> 1 1 1 26m",
"oc edit machinesets.cluster.x-k8s.io <machine_set_name> -n openshift-cluster-api",
"apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> namespace: openshift-cluster-api spec: replicas: 2 1",
"oc get machines.cluster.x-k8s.io -n openshift-cluster-api -l cluster.x-k8s.io/set-name=<machine_set_name>",
"NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_original_1> <cluster_name> <original_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h <machine_name_original_2> <cluster_name> <original_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h",
"oc annotate machines.cluster.x-k8s.io/<machine_name_original_1> -n openshift-cluster-api cluster.x-k8s.io/delete-machine=\"true\"",
"oc scale --replicas=4 \\ 1 machinesets.cluster.x-k8s.io <machine_set_name> -n openshift-cluster-api",
"oc get machines.cluster.x-k8s.io -n openshift-cluster-api -l cluster.x-k8s.io/set-name=<machine_set_name>",
"NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_original_1> <cluster_name> <original_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h <machine_name_original_2> <cluster_name> <original_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h <machine_name_updated_1> <cluster_name> <updated_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Provisioned 55s <machine_name_updated_2> <cluster_name> <updated_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Provisioning 55s",
"oc scale --replicas=2 \\ 1 machinesets.cluster.x-k8s.io <machine_set_name> -n openshift-cluster-api",
"oc describe machines.cluster.x-k8s.io <machine_name_updated_1> -n openshift-cluster-api",
"oc get machines.cluster.x-k8s.io -n openshift-cluster-api cluster.x-k8s.io/set-name=<machine_set_name>",
"NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_original_1> <cluster_name> <original_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_original_2> <cluster_name> <original_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_updated_1> <cluster_name> <updated_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_updated_2> <cluster_name> <updated_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m",
"NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_updated_1> <cluster_name> <updated_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_updated_2> <cluster_name> <updated_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m",
"apiVersion: cluster.x-k8s.io/v1beta1 kind: Cluster metadata: name: <cluster_name> 1 namespace: openshift-cluster-api spec: infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: <infrastructure_kind> 2 name: <cluster_name> namespace: openshift-cluster-api",
"apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: AWSCluster 1 metadata: name: <cluster_name> 2 namespace: openshift-cluster-api spec: controlPlaneEndpoint: 3 host: <control_plane_endpoint_address> port: 6443 region: <region> 4",
"apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: AWSMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 uncompressedUserData: true iamInstanceProfile: # instanceType: m5.large cloudInit: insecureSkipSecretsManager: true ami: id: # subnet: filters: - name: tag:Name values: - # additionalSecurityGroups: - filters: - name: tag:Name values: - #",
"apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: example template: metadata: labels: test: example spec: bootstrap: dataSecretName: worker-user-data 3 clusterName: <cluster_name> infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: AWSMachineTemplate 4 name: <template_name> 5",
"apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: GCPCluster 1 metadata: name: <cluster_name> 2 spec: controlPlaneEndpoint: 3 host: <control_plane_endpoint_address> port: 6443 network: name: <cluster_name>-network project: <project> 4 region: <region> 5",
"apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: GCPMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 rootDeviceType: pd-ssd rootDeviceSize: 128 instanceType: n1-standard-4 image: projects/rhcos-cloud/global/images/rhcos-411-85-202203181601-0-gcp-x86-64 subnet: <cluster_name>-worker-subnet serviceAccounts: email: <service_account_email_address> scopes: - https://www.googleapis.com/auth/cloud-platform additionalLabels: kubernetes-io-cluster-<cluster_name>: owned additionalNetworkTags: - <cluster_name>-worker ipForwarding: Disabled",
"apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: example template: metadata: labels: test: example spec: bootstrap: dataSecretName: worker-user-data 3 clusterName: <cluster_name> infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: GCPMachineTemplate 4 name: <template_name> 5 failureDomain: <failure_domain> 6",
"apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: OpenStackCluster 1 metadata: name: <cluster_name> 2 namespace: openshift-cluster-api labels: cluster.x-k8s.io/cluster-name: <cluster_name> spec: controlPlaneEndpoint: <control_plane_endpoint_address> 3 disableAPIServerFloatingIP: true tags: - openshiftClusterID=<cluster_name> network: id: <api_service_network_id> 4 externalNetwork: id: <floating_network_id> 5 identityRef: cloudName: openstack name: openstack-cloud-credentials",
"apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: OpenStackMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 flavor: <openstack_node_machine_flavor> 4 image: filter: name: <openstack_image> 5",
"apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> template: metadata: labels: test: example cluster.x-k8s.io/cluster-name: <cluster_name> cluster.x-k8s.io/set-name: <machine_set_name> node-role.kubernetes.io/<role>: \"\" spec: bootstrap: dataSecretName: worker-user-data 3 clusterName: <cluster_name> infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: OpenStackMachineTemplate 4 name: <template_name> 5 failureDomain: <nova_availability_zone> 6",
"oc delete machine.machine.openshift.io <machine_name>",
"oc delete machine.cluster.x-k8s.io <machine_name>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/machine_management/managing-machines-with-the-cluster-api |
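The chapter above creates each Cluster API primary resource in its own procedure. A minimal end-to-end sketch that strings those commands together is shown below; it assumes an AWS cluster with the TechPreviewNoUpgrade feature set enabled, and the file names cluster.yaml, infrastructure.yaml, template.yaml, and machineset.yaml, as well as the lowercase resource aliases awscluster and awsmachinetemplate, are illustrative assumptions rather than required values.

# Minimal sketch: create the Cluster API primary resources in order (AWS example).
# Assumes cluster.yaml, infrastructure.yaml, template.yaml, and machineset.yaml
# already contain the provider-specific manifests described in this chapter.
set -euo pipefail

# Obtain the cluster ID to substitute for <cluster_name> in each manifest.
CLUSTER_NAME=$(oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}')
echo "Cluster ID: ${CLUSTER_NAME}"

# Apply the resources in dependency order:
# cluster -> infrastructure cluster -> machine template -> compute machine set.
oc create -f cluster.yaml
oc create -f infrastructure.yaml
oc create -f template.yaml
oc create -f machineset.yaml

# Verify each resource, then watch the machines come up.
oc get cluster
oc get awscluster              # infrastructure kind for AWS
oc get awsmachinetemplate
oc get machineset -n openshift-cluster-api
oc get machine -n openshift-cluster-api

The same flow applies on GCP and RHOSP by substituting the GCPCluster and GCPMachineTemplate or OpenStackCluster and OpenStackMachineTemplate kinds listed in the provider tables.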
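Section 13.3 splits the machine template change and the machine replacement across two procedures. The sketch below combines them for a machine set that starts with two replicas, again using the AWS kinds; the file name new-template.yaml and the placeholder machine names are assumptions.

# Sketch: roll a Cluster API machine set onto a modified machine template (AWS example).
# Export the current template, save it under a new metadata.name, and apply it.
oc get awsmachinetemplate <template_name> -n openshift-cluster-api -o yaml > new-template.yaml
# Edit new-template.yaml: change metadata.name and the spec fields you want, then:
oc apply -f new-template.yaml

# Point the machine set at the new template; spec.template.spec.infrastructureRef.name
# must match the metadata.name of the new template.
oc edit machinesets.cluster.x-k8s.io <machine_set_name> -n openshift-cluster-api

# Mark the existing machines for deletion, scale up to create replacements,
# wait for the new machines to reach the Running phase, then scale back down.
oc annotate machines.cluster.x-k8s.io/<machine_name_original_1> \
    -n openshift-cluster-api cluster.x-k8s.io/delete-machine="true"
oc annotate machines.cluster.x-k8s.io/<machine_name_original_2> \
    -n openshift-cluster-api cluster.x-k8s.io/delete-machine="true"
oc scale --replicas=4 machinesets.cluster.x-k8s.io <machine_set_name> -n openshift-cluster-api
oc get machines.cluster.x-k8s.io -n openshift-cluster-api -l cluster.x-k8s.io/set-name=<machine_set_name>
oc scale --replicas=2 machinesets.cluster.x-k8s.io <machine_set_name> -n openshift-cluster-api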
Chapter 117. KafkaMirrorMakerConsumerSpec schema reference | Chapter 117. KafkaMirrorMakerConsumerSpec schema reference Used in: KafkaMirrorMakerSpec Full list of KafkaMirrorMakerConsumerSpec schema properties Configures a MirrorMaker consumer. 117.1. numStreams Use the consumer.numStreams property to configure the number of streams for the consumer. You can increase the throughput in mirroring topics by increasing the number of consumer threads. Consumer threads belong to the consumer group specified for Kafka MirrorMaker. Topic partitions are assigned across the consumer threads, which consume messages in parallel. 117.2. offsetCommitInterval Use the consumer.offsetCommitInterval property to configure an offset auto-commit interval for the consumer. You can specify the regular time interval at which an offset is committed after Kafka MirrorMaker has consumed data from the source Kafka cluster. The time interval is set in milliseconds, with a default value of 60,000. 117.3. config Use the consumer.config properties to configure Kafka options for the consumer as keys. The values can be one of the following JSON types: String Number Boolean Exceptions You can specify and configure the options listed in the Apache Kafka configuration documentation for consumers . However, AMQ Streams takes care of configuring and managing options related to the following, which cannot be changed: Kafka cluster bootstrap address Security (encryption, authentication, and authorization) Consumer group identifier Interceptors Properties with the following prefixes cannot be set: bootstrap.servers group.id interceptor.classes sasl. security. ssl. If the config property contains an option that cannot be changed, it is disregarded, and a warning message is logged to the Cluster Operator log file. All other supported options are forwarded to MirrorMaker, including the following exceptions to the options configured by AMQ Streams: Any ssl configuration for supported TLS versions and cipher suites Important The Cluster Operator does not validate keys or values in the config object provided. If an invalid configuration is provided, the MirrorMaker cluster might not start or might become unstable. In this case, fix the configuration so that the Cluster Operator can roll out the new configuration to all MirrorMaker nodes. 117.4. groupId Use the consumer.groupId property to configure a consumer group identifier for the consumer. Kafka MirrorMaker uses a Kafka consumer to consume messages, behaving like any other Kafka consumer client. Messages consumed from the source Kafka cluster are mirrored to a target Kafka cluster. A group identifier is required, as the consumer needs to be part of a consumer group for the assignment of partitions. 117.5. KafkaMirrorMakerConsumerSpec schema properties Property Description numStreams Specifies the number of consumer stream threads to create. integer offsetCommitInterval Specifies the offset auto-commit interval in ms. Default value is 60000. integer bootstrapServers A list of host:port pairs for establishing the initial connection to the Kafka cluster. string groupId A unique string that identifies the consumer group this consumer belongs to. string authentication Authentication configuration for connecting to the cluster. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-256, scram-sha-512, plain, oauth]. 
KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha256 , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth config The MirrorMaker consumer config. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, group.id, sasl., security., interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). map tls TLS configuration for connecting MirrorMaker to the cluster. ClientTls | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-kafkamirrormakerconsumerspec-reference |
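The schema reference above lists the consumer properties individually; the following is a minimal sketch of how they might sit together inside a KafkaMirrorMaker resource. Only the consumer properties come from this schema; the apiVersion, resource name, bootstrap addresses, and the producer and include fields are illustrative assumptions.

apiVersion: kafka.strimzi.io/v1beta2   # assumed API version
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker                # example name
spec:
  replicas: 1
  consumer:
    bootstrapServers: my-source-cluster-kafka-bootstrap:9092
    groupId: my-source-group-id
    numStreams: 2                      # two consumer stream threads
    offsetCommitInterval: 120000       # commit offsets every 2 minutes (default is 60000)
    config:
      max.poll.records: 100            # forwarded to the consumer; restricted prefixes are ignored
      receive.buffer.bytes: 32768
  producer:
    bootstrapServers: my-target-cluster-kafka-bootstrap:9092
  include: ".*"                        # topics to mirror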
4.30. WTI Power Switch | 4.30. WTI Power Switch Table 4.31, "WTI Power Switch" lists the fence device parameters used by fence_wti , the fence agent for the WTI network power switch. Table 4.31. WTI Power Switch luci Field cluster.conf Attribute Description Name name A name for the WTI power switch connected to the cluster. IP Address or Hostname ipaddr The IP or address or host name assigned to the device. IP Port (optional) ipport The TCP port to use to connect to the device. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Force command prompt cmd_prompt The command prompt to use. The default value is ['RSM>', '>MPC', 'IPS>', 'TPS>', 'NBB>', 'NPS>', 'VMR>'] Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Use SSH secure Indicates that system will use SSH to access the device. When using SSH, you must specify either a password, a password script, or an identity file. SSH Options ssh_options SSH options to use. The default value is -1 -c blowfish . Path to SSH Identity File identity_file The identity file for SSH. Port port Physical plug number or name of virtual machine. Figure 4.23, "WTI Fencing" shows the configuration screen for adding a WTI fence device Figure 4.23. WTI Fencing The following command creates a fence device instance for a WTI fence device: The following is the cluster.conf entry for the fence_wti device: | [
"ccs -f cluster.conf --addfencedev wtipwrsw1 agent=fence_wti cmd_prompt=VMR> login=root passwd=password123 power_wait=60",
"<fencedevices> <fencedevice agent=\"fence_wti\" cmd_prompt=\"VMR>\" ipaddr=\"192.168.0.1\" login=\"root\" name=\"wtipwrsw1\" passwd=\"password123\" power_wait=\"60\"/> </fencedevices>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s1-software-fence-wti-ca |
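The fencedevice entry above only defines the WTI switch itself; to fence a node, the device is also referenced from that node's fence method in cluster.conf, where the port attribute selects the physical plug. A minimal sketch follows; the node name, method name, and plug number are example values.

<!-- Sketch: referencing the WTI fence device from a cluster node.
     The node name, method name, and plug number are assumptions. -->
<clusternode name="node-01.example.com" nodeid="1">
  <fence>
    <method name="WTI">
      <device name="wtipwrsw1" port="1"/>
    </method>
  </fence>
</clusternode>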
Chapter 9. Disabling Windows container workloads | Chapter 9. Disabling Windows container workloads You can disable the capability to run Windows container workloads by uninstalling the Windows Machine Config Operator (WMCO) and deleting the namespace that was added by default when you installed the WMCO. 9.1. Uninstalling the Windows Machine Config Operator You can uninstall the Windows Machine Config Operator (WMCO) from your cluster. Prerequisites Delete the Windows Machine objects hosting your Windows workloads. Procedure From the Operators OperatorHub page, use the Filter by keyword box to search for Red Hat Windows Machine Config Operator . Click the Red Hat Windows Machine Config Operator tile. The Operator tile indicates it is installed. In the Windows Machine Config Operator descriptor page, click Uninstall . 9.2. Deleting the Windows Machine Config Operator namespace You can delete the namespace that was generated for the Windows Machine Config Operator (WMCO) by default. Prerequisites The WMCO is removed from your cluster. Procedure Remove all Windows workloads that were created in the openshift-windows-machine-config-operator namespace: USD oc delete --all pods --namespace=openshift-windows-machine-config-operator Verify that all pods in the openshift-windows-machine-config-operator namespace are deleted or are reporting a terminating state: USD oc get pods --namespace openshift-windows-machine-config-operator Delete the openshift-windows-machine-config-operator namespace: USD oc delete namespace openshift-windows-machine-config-operator Additional resources Deleting Operators from a cluster Removing Windows nodes . | [
"oc delete --all pods --namespace=openshift-windows-machine-config-operator",
"oc get pods --namespace openshift-windows-machine-config-operator",
"oc delete namespace openshift-windows-machine-config-operator"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/windows_container_support_for_openshift/disabling-windows-container-workloads |
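The three commands above can be chained into a small cleanup sketch that waits for the workloads to terminate before removing the namespace; the polling loop and ten-second interval are illustrative additions, not part of the documented procedure.

# Sketch: remove Windows workloads, then delete the WMCO namespace.
# Run this only after the Windows Machine Config Operator has been uninstalled.
set -euo pipefail

oc delete --all pods --namespace=openshift-windows-machine-config-operator

# Wait until no pods remain before deleting the namespace.
while [ -n "$(oc get pods --namespace openshift-windows-machine-config-operator -o name 2>/dev/null)" ]; do
  echo "Waiting for pods in openshift-windows-machine-config-operator to terminate..."
  sleep 10
done

oc delete namespace openshift-windows-machine-config-operator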
7.9. Additional Resources | 7.9. Additional Resources A complete list of SSSD-related man pages is available in the SEE ALSO section in the sssd (8) man page. Troubleshooting advice: Section A.1, "Troubleshooting SSSD" . A procedure for configuring SSSD to process password expiration warnings sent by the server and display them to users on the local system: Setting Password Expiry in Red Hat Knowledgebase An SSSD client can automatically create a GID for every user retrieved from an LDAP server, and at the same time ensure that the GID matches the user's UID unless the GID number is already taken. To see how automatic creation of GIDs can be set up on an SSSD client which is directly integrated into Active Directory, see the corresponding section in the Windows Integration Guide . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system-level_authentication_guide/sssd-manpages |
Chapter 5. Adding TLS Certificates to the Red Hat Quay Container | Chapter 5. Adding TLS Certificates to the Red Hat Quay Container To add custom TLS certificates to Red Hat Quay, create a new directory named extra_ca_certs/ beneath the Red Hat Quay config directory. Copy any required site-specific TLS certificates to this new directory. 5.1. Add TLS certificates to Red Hat Quay View certificate to be added to the container Create certs directory and copy certificate there Obtain the Quay container's CONTAINER ID with podman ps : Restart the container with that ID: Examine the certificate copied into the container namespace: 5.2. Adding custom SSL/TLS certificates when Red Hat Quay is deployed on Kubernetes When deployed on Kubernetes, Red Hat Quay mounts in a secret as a volume to store config assets. Currently, this breaks the upload certificate function of the superuser panel. As a temporary workaround, base64 encoded certificates can be added to the secret after Red Hat Quay has been deployed. Use the following procedure to add custom SSL/TLS certificates when Red Hat Quay is deployed on Kubernetes. Prerequisites Red Hat Quay has been deployed. You have a custom ca.crt file. Procedure Base64 encode the contents of an SSL/TLS certificate by entering the following command: USD cat ca.crt | base64 -w 0 Example output ...c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= Enter the following kubectl command to edit the quay-enterprise-config-secret file: USD kubectl --namespace quay-enterprise edit secret/quay-enterprise-config-secret Add an entry for the certificate and paste the full base64 encoded string under the entry. For example: custom-cert.crt: c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= Use the kubectl delete command to remove all Red Hat Quay pods. For example: USD kubectl delete pod quay-operator.v3.7.1-6f9d859bd-p5ftc quayregistry-clair-postgres-7487f5bd86-xnxpr quayregistry-quay-app-upgrade-xq2v6 quayregistry-quay-config-editor-6dfdcfc44f-hlvwm quayregistry-quay-database-859d5445ff-cqthr quayregistry-quay-redis-84f888776f-hhgms Afterwards, the Red Hat Quay deployment automatically schedules replacement pods with the new certificate data. | [
"cat storage.crt -----BEGIN CERTIFICATE----- MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV [...] -----END CERTIFICATE-----",
"mkdir -p quay/config/extra_ca_certs cp storage.crt quay/config/extra_ca_certs/ tree quay/config/ ├── config.yaml ├── extra_ca_certs │ ├── storage.crt",
"sudo podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS 5a3e82c4a75f <registry>/<repo>/quay:v3.9.10 \"/sbin/my_init\" 24 hours ago Up 18 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 443/tcp grave_keller",
"sudo podman restart 5a3e82c4a75f",
"sudo podman exec -it 5a3e82c4a75f cat /etc/ssl/certs/storage.pem -----BEGIN CERTIFICATE----- MIIDTTCCAjWgAwIBAgIJAMVr9ngjJhzbMA0GCSqGSIb3DQEBCwUAMD0xCzAJBgNV",
"cat ca.crt | base64 -w 0",
"...c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=",
"kubectl --namespace quay-enterprise edit secret/quay-enterprise-config-secret",
"custom-cert.crt: c1psWGpqeGlPQmNEWkJPMjJ5d0pDemVnR2QNCnRsbW9JdEF4YnFSdVd3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=",
"kubectl delete pod quay-operator.v3.7.1-6f9d859bd-p5ftc quayregistry-clair-postgres-7487f5bd86-xnxpr quayregistry-quay-app-upgrade-xq2v6 quayregistry-quay-config-editor-6dfdcfc44f-hlvwm quayregistry-quay-database-859d5445ff-cqthr quayregistry-quay-redis-84f888776f-hhgms"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/manage_red_hat_quay/config-custom-ssl-certs-manual |
Chapter 101. MongoDB | Chapter 101. MongoDB Both producer and consumer are supported According to Wikipedia: "NoSQL is a movement promoting a loosely defined class of non-relational data stores that break with a long history of relational databases and ACID guarantees." NoSQL solutions have grown in popularity in the last few years, and major extremely-used sites and services such as Facebook, LinkedIn, Twitter, etc. are known to use them extensively to achieve scalability and agility. Basically, NoSQL solutions differ from traditional RDBMS (Relational Database Management Systems) in that they don't use SQL as their query language and generally don't offer ACID-like transactional behaviour nor relational data. Instead, they are designed around the concept of flexible data structures and schemas (meaning that the traditional concept of a database table with a fixed schema is dropped), extreme scalability on commodity hardware and blazing-fast processing. MongoDB is a very popular NoSQL solution and the camel-mongodb component integrates Camel with MongoDB allowing you to interact with MongoDB collections both as a producer (performing operations on the collection) and as a consumer (consuming documents from a MongoDB collection). MongoDB revolves around the concepts of documents (not as is office documents, but rather hierarchical data defined in JSON/BSON) and collections. This component page will assume you are familiar with them. Otherwise, visit http://www.mongodb.org/ . Note The MongoDB Camel component uses Mongo Java Driver 4.x. 101.1. Dependencies When using mongodb with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mongodb-starter</artifactId> </dependency> 101.2. URI format 101.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 101.3.1. Configuring Component Options At the component level, you set general and shared configurations that are, then, inherited by the endpoints. It is the highest configuration level. For example, a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. You can configure components using: the Component DSL . in a configuration file (application.properties, *.yaml files, etc). directly in the Java code. 101.3.2. Configuring Endpoint Options You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders . Property placeholders provide a few benefits: They help prevent using hardcoded urls, port numbers, sensitive information, and other settings. They allow externalizing the configuration from the code. They help the code to become more flexible and reusable. 
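To make the component/endpoint split described above concrete, the following is a minimal sketch (not taken from this chapter) of registering a shared MongoClient under the name that is later used as the connectionBean path parameter, so that an endpoint URI such as mongodb:myDb can resolve it from the registry. The connection string, bean name and route endpoints are illustrative assumptions, not values required by the component.

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;
import com.mongodb.client.MongoClients;

public class MongoDbComponentSketch {
    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();

        // Component-level concern: one shared client, bound in the registry under "myDb".
        // "myDb" is then used as the connectionBean part of the endpoint URI below.
        context.getRegistry().bind("myDb", MongoClients.create("mongodb://localhost:27017"));

        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Endpoint-level concerns: database, collection and operation are URI query parameters.
                from("direct:start")
                    .to("mongodb:myDb?database=flights&collection=tickets&operation=findAll");
            }
        });

        context.start();
    }
}
```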
The following two sections list all the options, firstly for the component followed by the endpoint. 101.4. Component Options The MongoDB component supports 4 options, which are listed below. Name Description Default Type mongoConnection (common) Autowired Shared client used for connection. All endpoints generated from the component will share this connection client. MongoClient bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 101.5. Endpoint Options The MongoDB endpoint is configured using URI syntax: with the following path and query parameters: 101.5.1. Path Parameters (1 parameters) Name Description Default Type connectionBean (common) Required Sets the connection bean reference used to lookup a client for connecting to a database. String 101.5.2. Query Parameters (27 parameters) Name Description Default Type collection (common) Sets the name of the MongoDB collection to bind to this endpoint. String collectionIndex (common) Sets the collection index (JSON FORMAT : \\{ field1 : order1, field2 : order2}). String createCollection (common) Create collection during initialisation if it doesn't exist. Default is true. true boolean database (common) Sets the name of the MongoDB database to target. String hosts (common) Host address of mongodb server in host:port format. It's possible also use more than one address, as comma separated list of hosts: host1:port1,host2:port2. If hosts parameter is specified, provided connectionBean is ignored. String mongoConnection (common) Sets the connection bean used as a client for connecting to a database. MongoClient operation (common) Sets the operation this endpoint will execute against MongoDB. Enum values: findById findOneByQuery findAll findDistinct insert save update remove bulkWrite aggregate getDbStats getColStats count command MongoDbOperation outputType (common) Convert the output of the producer to the selected type : DocumentList Document or MongoIterable. DocumentList or MongoIterable applies to findAll and aggregate. Document applies to all other operations. 
Enum values: DocumentList Document MongoIterable MongoDbOutputType bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean consumerType (consumer) Consumer type. String exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean cursorRegenerationDelay (advanced) MongoDB tailable cursors will block until new data arrives. If no new data is inserted, after some time the cursor will be automatically freed and closed by the MongoDB server. The client is expected to regenerate the cursor if needed. This value specifies the time to wait before attempting to fetch a new cursor, and if the attempt fails, how long before the attempt is made. Default value is 1000ms. 1000 long dynamicity (advanced) Sets whether this endpoint will attempt to dynamically resolve the target database and collection from the incoming Exchange properties. Can be used to override at runtime the database and collection specified on the otherwise static endpoint URI. It is disabled by default to boost performance. Enabling it will take a minimal performance hit. false boolean readPreference (advanced) Configure how MongoDB clients route read operations to the members of a replica set. Possible values are PRIMARY, PRIMARY_PREFERRED, SECONDARY, SECONDARY_PREFERRED or NEAREST. Enum values: PRIMARY PRIMARY_PREFERRED SECONDARY SECONDARY_PREFERRED NEAREST PRIMARY String writeConcern (advanced) Configure the connection bean with the level of acknowledgment requested from MongoDB for write operations to a standalone mongod, replicaset or cluster. Possible values are ACKNOWLEDGED, W1, W2, W3, UNACKNOWLEDGED, JOURNALED or MAJORITY. Enum values: ACKNOWLEDGED W1 W2 W3 UNACKNOWLEDGED JOURNALED MAJORITY ACKNOWLEDGED String writeResultAsHeader (advanced) In write operations, it determines whether instead of returning WriteResult as the body of the OUT message, we transfer the IN message to the OUT and attach the WriteResult as a header. false boolean streamFilter (changeStream) Filter condition for change streams consumer. String password (security) User password for mongodb connection. String username (security) Username for mongodb connection. 
String persistentId (tail) One tail tracking collection can host many trackers for several tailable consumers. To keep them separate, each tracker should have its own unique persistentId. String persistentTailTracking (tail) Enable persistent tail tracking, which is a mechanism to keep track of the last consumed message across system restarts. The next time the system is up, the endpoint will recover the cursor from the point where it last stopped slurping records. false boolean tailTrackCollection (tail) Collection where tail tracking information will be persisted. If not specified, MongoDbTailTrackingConfig#DEFAULT_COLLECTION will be used by default. String tailTrackDb (tail) Indicates what database the tail tracking mechanism will persist to. If not specified, the current database will be picked by default. Dynamicity will not be taken into account even if enabled, i.e. the tail tracking database will not vary past endpoint initialisation. String tailTrackField (tail) Field where the last tracked value will be placed. If not specified, MongoDbTailTrackingConfig#DEFAULT_FIELD will be used by default. String tailTrackIncreasingField (tail) Correlation field in the incoming record which is of increasing nature and will be used to position the tailing cursor every time it is generated. The cursor will be (re)created with a query of type: tailTrackIncreasingField greater than lastValue (possibly recovered from persistent tail tracking). Can be of type Integer, Date, String, etc. NOTE: No support for dot notation at the current time, so the field should be at the top level of the document. String 101.6. Configuration of database in Spring XML The following Spring XML creates a bean defining the connection to a MongoDB instance. Since mongo java driver 3, the WriteConcern and readPreference options are not dynamically modifiable. They are defined in the mongoClient object
from("direct:findById") .to("mongodb:myDb?database=flights&collection=tickets&operation=findById") .to("mock:resultFindById"); Please, note that the default _id is treated by Mongo as and ObjectId type, so you may need to convert it properly. from("direct:findById") .convertBodyTo(ObjectId.class) .to("mongodb:myDb?database=flights&collection=tickets&operation=findById") .to("mock:resultFindById"); Note Supports optional parameters This operation supports projection operators. See Specifying a fields filter (projection) . 101.8.1.2. findOneByQuery Retrieve the first element from a collection matching a MongoDB query selector. If the CamelMongoDbCriteria header is set, then its value is used as the query selector . If the CamelMongoDbCriteria header is null , then the IN message body is used as the query selector. In both cases, the query selector should be of type Bson or convertible to Bson (for instance, a JSON string or HashMap ). See Type conversions for more info. Create query selectors using the Filters provided by the MongoDB Driver. 101.8.1.3. Example without a query selector (returns the first document in a collection) from("direct:findOneByQuery") .to("mongodb:myDb?database=flights&collection=tickets&operation=findOneByQuery") .to("mock:resultFindOneByQuery"); 101.8.1.4. Example with a query selector (returns the first matching document in a collection): from("direct:findOneByQuery") .setHeader(MongoDbConstants.CRITERIA, constant(Filters.eq("name", "Raul Kripalani"))) .to("mongodb:myDb?database=flights&collection=tickets&operation=findOneByQuery") .to("mock:resultFindOneByQuery"); Note Supports optional parameters This operation supports projection operators and sort clauses. See Specifying a fields filter (projection) , Specifying a sort clause. 101.8.1.5. findAll The findAll operation returns all documents matching a query, or none at all, in which case all documents contained in the collection are returned. The query object is extracted CamelMongoDbCriteria header . if the CamelMongoDbCriteria header is null the query object is extracted message body, i.e. it should be of type Bson or convertible to Bson . It can be a JSON String or a Hashmap. See Type conversions for more info. 101.8.1.5.1. Example without a query selector (returns all documents in a collection) from("direct:findAll") .to("mongodb:myDb?database=flights&collection=tickets&operation=findAll") .to("mock:resultFindAll"); 101.8.1.5.2. Example with a query selector (returns all matching documents in a collection) from("direct:findAll") .setHeader(MongoDbConstants.CRITERIA, Filters.eq("name", "Raul Kripalani")) .to("mongodb:myDb?database=flights&collection=tickets&operation=findAll") .to("mock:resultFindAll"); Paging and efficient retrieval is supported via the following headers: Header key Quick constant Description (extracted from MongoDB API doc) Expected type CamelMongoDbNumToSkip MongoDbConstants.NUM_TO_SKIP Discards a given number of elements at the beginning of the cursor. int/Integer CamelMongoDbLimit MongoDbConstants.LIMIT Limits the number of elements returned. int/Integer CamelMongoDbBatchSize MongoDbConstants.BATCH_SIZE Limits the number of elements returned in one batch. A cursor typically fetches a batch of result objects and store them locally. If batchSize is positive, it represents the size of each batch of objects retrieved. It can be adjusted to optimize performance and limit data transfer. 
If batchSize is negative, it will limit the number of objects returned to those that fit within the max batch size limit (usually 4MB), and the cursor will be closed. For example, if batchSize is -10, then the server will return a maximum of 10 documents and as many as can fit in 4MB, then close the cursor. Note that this feature is different from limit() in that documents must fit within a maximum size, and it removes the need to send a request to close the cursor server-side. The batch size can be changed even after a cursor is iterated, in which case the setting will apply on the next batch retrieval. int/Integer CamelMongoDbAllowDiskUse MongoDbConstants.ALLOW_DISK_USE Sets allowDiskUse MongoDB flag. This is supported since MongoDB Server 4.3.1. Using this header with an older MongoDB Server version can cause the query to fail. boolean/Boolean 101.8.1.5.3. Example with option outputType=MongoIterable and batch size from("direct:findAll") .setHeader(MongoDbConstants.BATCH_SIZE).constant(10) .setHeader(MongoDbConstants.CRITERIA, constant(Filters.eq("name", "Raul Kripalani"))) .to("mongodb:myDb?database=flights&collection=tickets&operation=findAll&outputType=MongoIterable") .to("mock:resultFindAll"); The findAll operation will also return the following OUT headers to enable you to iterate through result pages if you are using paging: Header key Quick constant Description (extracted from MongoDB API doc) Data type CamelMongoDbResultTotalSize MongoDbConstants.RESULT_TOTAL_SIZE Number of objects matching the query. This does not take limit/skip into consideration. int/Integer CamelMongoDbResultPageSize MongoDbConstants.RESULT_PAGE_SIZE Number of objects matching the query. This does not take limit/skip into consideration. int/Integer Note Supports optional parameters This operation supports projection operators and sort clauses. See Specifying a fields filter (projection) , Specifying a sort clause. 101.8.1.6. count Returns the total number of objects in a collection, returning a Long as the OUT message body. The following example will count the number of records in the "dynamicCollectionName" collection. Notice how dynamicity is enabled, and as a result, the operation will not run against the "notableScientists" collection, but against the "dynamicCollectionName" collection. // from("direct:count").to("mongodb:myDb?database=tickets&collection=flights&operation=count&dynamicity=true"); Long result = template.requestBodyAndHeader("direct:count", "irrelevantBody", MongoDbConstants.COLLECTION, "dynamicCollectionName"); assertTrue("Result is not of type Long", result instanceof Long); You can provide a query. The query object is extracted from the CamelMongoDbCriteria header. If the CamelMongoDbCriteria header is null, the query object is extracted from the message body, i.e. it should be of type Bson or convertible to Bson, and the operation will return the number of documents matching this criteria. Document query = ... Long count = template.requestBodyAndHeader("direct:count", query, MongoDbConstants.COLLECTION, "dynamicCollectionName"); 101.8.1.7. Specifying a fields filter (projection) Query operations will, by default, return the matching objects in their entirety (with all their fields). If your documents are large and you only require retrieving a subset of their fields, you can specify a field filter in all query operations, simply by setting the relevant Bson (or type convertible to Bson , such as a JSON String, Map, etc.) on the CamelMongoDbFieldsProjection header, constant shortcut: MongoDbConstants.FIELDS_PROJECTION .
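As a rough route-based sketch of the paging and projection headers described above (the endpoint, database and collection names are reused from this chapter's examples; the skip/limit values and the mock endpoint are arbitrary assumptions), the headers can also be set directly in a route, here using the MongoDB driver's Projections helper:

```java
import static com.mongodb.client.model.Projections.exclude;

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.mongodb.MongoDbConstants;

public class PagedFindAllRouteSketch extends RouteBuilder {
    @Override
    public void configure() {
        // Skip the first 20 matches, return at most 10, and drop the _id field from each document.
        from("direct:findPage")
            .setHeader(MongoDbConstants.NUM_TO_SKIP, constant(20))
            .setHeader(MongoDbConstants.LIMIT, constant(10))
            .setHeader(MongoDbConstants.FIELDS_PROJECTION, constant(exclude("_id")))
            .to("mongodb:myDb?database=flights&collection=tickets&operation=findAll")
            .to("mock:resultFindPage");
    }
}
```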
Here is an example that uses MongoDB's Projections to simplify the creation of Bson. It retrieves all fields except _id and boringField : // route: from("direct:findAll").to("mongodb:myDb?database=flights&collection=tickets&operation=findAll") Bson fieldProjection = Projection.exclude("_id", "boringField"); Object result = template.requestBodyAndHeader("direct:findAll", ObjectUtils.NULL, MongoDbConstants.FIELDS_PROJECTION, fieldProjection); 101.8.1.8. Specifying a sort clause There is often a requirement to fetch the min/max record from a collection based on sorting by a particular field. The following example uses MongoDB's Sorts to simplify the creation of Bson, sorting by _id in descending order: // route: from("direct:findAll").to("mongodb:myDb?database=flights&collection=tickets&operation=findAll") Bson sorts = Sorts.descending("_id"); Object result = template.requestBodyAndHeader("direct:findAll", ObjectUtils.NULL, MongoDbConstants.SORT_BY, sorts); In a Camel route the SORT_BY header can be used with the findOneByQuery operation to achieve the same result. If the FIELDS_PROJECTION header is also specified the operation will return a single field/value pair that can be passed directly to another component (for example, a parameterized MyBatis SELECT query). This example demonstrates fetching the temporally newest document from a collection and reducing the result to a single field, based on the documentTimestamp field: .from("direct:someTriggeringEvent") .setHeader(MongoDbConstants.SORT_BY).constant(Sorts.descending("documentTimestamp")) .setHeader(MongoDbConstants.FIELDS_PROJECTION).constant(Projection.include("documentTimestamp")) .setBody().constant("{}") .to("mongodb:myDb?database=local&collection=myDemoCollection&operation=findOneByQuery") .to("direct:aMyBatisParameterizedSelect"); 101.8.2. Create/update operations 101.8.2.1. insert Inserts a new object into the MongoDB collection, taken from the IN message body. Type conversion is attempted to turn it into Document or a List . Two modes are supported: single insert and multiple insert. For multiple insert, the endpoint will expect a List, Array or Collection of objects of any type, as long as they are - or can be converted to - Document . Example: from("direct:insert") .to("mongodb:myDb?database=flights&collection=tickets&operation=insert"); The operation will return a WriteResult, and depending on the WriteConcern or the value of the invokeGetLastError option, getLastError() would have been called already or not. If you want to access the ultimate result of the write operation, you need to retrieve the CommandResult by calling getLastError() or getCachedLastError() on the WriteResult . Then you can verify the result by calling CommandResult.ok() , CommandResult.getErrorMessage() and/or CommandResult.getException() . Note that the new object's _id must be unique in the collection. If you don't specify the value, MongoDB will automatically generate one for you.
But if you do specify it and it is not unique, the insert operation will fail (and for Camel to notice, you will need to enable invokeGetLastError or set a WriteConcern that waits for the write result). This is not a limitation of the component, but it is how things work in MongoDB for higher throughput. If you are using a custom _id , you are expected to ensure at the application level that it is unique (and this is a good practice too). OID(s) of the inserted record(s) is stored in the message header under CamelMongoOid key ( MongoDbConstants.OID constant). The value stored is org.bson.types.ObjectId for single insert or java.util.List<org.bson.types.ObjectId> if multiple records have been inserted. In MongoDB Java Driver 3.x the insertOne and insertMany operations return void. The Camel insert operation returns the Document or List of Documents inserted. Note that each Document is updated with a new OID if needed. 101.8.2.2. save The save operation is equivalent to an upsert (UPdate, inSERT) operation, where the record will be updated, and if it doesn't exist, it will be inserted, all in one atomic operation. MongoDB will perform the matching based on the _id field. Beware that in case of an update, the object is replaced entirely and the usage of MongoDB's USDmodifiers is not permitted. Therefore, if you want to manipulate the object if it already exists, you have two options: perform a query to retrieve the entire object first along with all its fields (may not be efficient), alter it inside Camel and then save it. use the update operation with USDmodifiers , which will execute the update at the server-side instead. You can enable the upsert flag, in which case if an insert is required, MongoDB will apply the USDmodifiers to the filter query object and insert the result. If the document to be saved does not contain the _id attribute, the operation will be an insert, and the new _id created will be placed in the CamelMongoOid header. For example: from("direct:insert") .to("mongodb:myDb?database=flights&collection=tickets&operation=save"); // route: from("direct:insert").to("mongodb:myDb?database=flights&collection=tickets&operation=save"); org.bson.Document docForSave = new org.bson.Document(); docForSave.put("key", "value"); Object result = template.requestBody("direct:insert", docForSave); 101.8.2.3. update Update one or multiple records on the collection. Requires a filter query and update rules. You can define the filter using MongoDBConstants.CRITERIA header as Bson and define the update rules as Bson in Body. Note Update after enrich While defining the filter by using MongoDBConstants.CRITERIA header as Bson to query mongodb before you do update, you should notice you need to remove it from the resulting camel exchange during aggregation if you use the enrich pattern with an aggregation strategy and then apply mongodb update. If you don't remove this header during aggregation and/or redefine MongoDBConstants.CRITERIA header before sending camel exchange to mongodb producer endpoint, you may end up with invalid camel exchange payload while updating mongodb. The second way requires a List<Bson> as the IN message body containing exactly 2 elements: Element 1 (index 0) ⇒ filter query ⇒ determines what objects will be affected, same as a typical query object Element 2 (index 1) ⇒ update rules ⇒ how matched objects will be updated. All modifier operations from MongoDB are supported. Note Multiupdates By default, MongoDB will only update 1 object even if multiple objects match the filter query.
To instruct MongoDB to update all matching records, set the CamelMongoDbMultiUpdate IN message header to true . A header with key CamelMongoDbRecordsAffected will be returned ( MongoDbConstants.RECORDS_AFFECTED constant) with the number of records updated (copied from WriteResult.getN() ). Supports the following IN message headers: Header key Quick constant Description (extracted from MongoDB API doc) Expected type CamelMongoDbMultiUpdate MongoDbConstants.MULTIUPDATE If the update should be applied to all objects matching. See http://www.mongodb.org/display/DOCS/Atomic+Operations boolean/Boolean CamelMongoDbUpsert MongoDbConstants.UPSERT If the database should create the element if it does not exist boolean/Boolean For example, the following will update all records whose filterField field equals true by setting the value of the "scientist" field to "Darwin": // route: from("direct:update").to("mongodb:myDb?database=science&collection=notableScientists&operation=update"); List<Bson> body = new ArrayList<>(); Bson filterField = Filters.eq("filterField", true); body.add(filterField); BsonDocument updateObj = new BsonDocument().append("USDset", new BsonDocument("scientist", new BsonString("Darwin"))); body.add(updateObj); Object result = template.requestBodyAndHeader("direct:update", body, MongoDbConstants.MULTIUPDATE, true); // route: from("direct:update").to("mongodb:myDb?database=science&collection=notableScientists&operation=update"); Maps<String, Object> headers = new HashMap<>(2); headers.add(MongoDbConstants.MULTIUPDATE, true); headers.add(MongoDbConstants.FIELDS_FILTER, Filters.eq("filterField", true)); String updateObj = Updates.set("scientist", "Darwin");; Object result = template.requestBodyAndHeaders("direct:update", updateObj, headers); // route: from("direct:update").to("mongodb:myDb?database=science&collection=notableScientists&operation=update"); String updateObj = "[{\"filterField\": true}, {\"USDset\", {\"scientist\", \"Darwin\"}}]"; Object result = template.requestBodyAndHeader("direct:update", updateObj, MongoDbConstants.MULTIUPDATE, true); 101.8.3. Delete operations 101.8.3.1. remove Remove matching records from the collection. The IN message body will act as the removal filter query, and is expected to be of type DBObject or a type convertible to it. The following example will remove all objects whose field 'conditionField' equals true, in the science database, notableScientists collection: // route: from("direct:remove").to("mongodb:myDb?database=science&collection=notableScientists&operation=remove"); Bson conditionField = Filters.eq("conditionField", true); Object result = template.requestBody("direct:remove", conditionField); A header with key CamelMongoDbRecordsAffected is returned ( MongoDbConstants.RECORDS_AFFECTED constant) with type int , containing the number of records deleted (copied from WriteResult.getN() ). 101.8.4. Bulk Write Operations 101.8.4.1. bulkWrite Performs write operations in bulk with controls for order of execution. Requires a List<WriteModel<Document>> as the IN message body containing commands for insert, update, and delete operations. 
The following example will insert a new scientist "Pierre Curie", update record with id "5" by setting the value of the "scientist" field to "Marie Curie" and delete record with id "3" : // route: from("direct:bulkWrite").to("mongodb:myDb?database=science&collection=notableScientists&operation=bulkWrite"); List<WriteModel<Document>> bulkOperations = Arrays.asList( new InsertOneModel<>(new Document("scientist", "Pierre Curie")), new UpdateOneModel<>(new Document("_id", "5"), new Document("USDset", new Document("scientist", "Marie Curie"))), new DeleteOneModel<>(new Document("_id", "3"))); BulkWriteResult result = template.requestBody("direct:bulkWrite", bulkOperations, BulkWriteResult.class); By default, operations are executed in order and interrupted on the first write error without processing any remaining write operations in the list. To instruct MongoDB to continue to process remaining write operations in the list, set the CamelMongoDbBulkOrdered IN message header to false . Unordered operations are executed in parallel and this behavior is not guaranteed. Header key Quick constant Description (extracted from MongoDB API doc) Expected type CamelMongoDbBulkOrdered MongoDbConstants.BULK_ORDERED Perform an ordered or unordered operation execution. Defaults to true. boolean/Boolean 101.8.5. Other operations 101.8.5.1. aggregate Perform a aggregation with the given pipeline contained in the body. Aggregations could be long and heavy operations. Use with care. // route: from("direct:aggregate").to("mongodb:myDb?database=science&collection=notableScientists&operation=aggregate"); List<Bson> aggregate = Arrays.asList(match(or(eq("scientist", "Darwin"), eq("scientist", group("USDscientist", sum("count", 1))); from("direct:aggregate") .setBody().constant(aggregate) .to("mongodb:myDb?database=science&collection=notableScientists&operation=aggregate") .to("mock:resultAggregate"); Supports the following IN message headers: Header key Quick constant Description (extracted from MongoDB API doc) Expected type CamelMongoDbBatchSize MongoDbConstants.BATCH_SIZE Sets the number of documents to return per batch. int/Integer CamelMongoDbAllowDiskUse MongoDbConstants.ALLOW_DISK_USE Enable aggregation pipeline stages to write data to temporary files. boolean/Boolean By default a List of all results is returned. This can be heavy on memory depending on the size of the results. A safer alternative is to set your outputType=MongoIterable. The Processor will see an iterable in the message body allowing it to step through the results one by one. Thus setting a batch size and returning an iterable allows for efficient retrieval and processing of the result. An example would look like: List<Bson> aggregate = Arrays.asList(match(or(eq("scientist", "Darwin"), eq("scientist", group("USDscientist", sum("count", 1))); from("direct:aggregate") .setHeader(MongoDbConstants.BATCH_SIZE).constant(10) .setBody().constant(aggregate) .to("mongodb:myDb?database=science&collection=notableScientists&operation=aggregate&outputType=MongoIterable") .split(body()) .streaming() .to("mock:resultAggregate"); Note that calling .split(body()) is enough to send the entries down the route one-by-one, however it would still load all the entries into memory first. Calling .streaming() is thus required to load data into memory by batches. 101.8.5.2. getDbStats Equivalent of running the db.stats() command in the MongoDB shell, which displays useful statistic figures about the database. 
For example: Usage example: // from("direct:getDbStats").to("mongodb:myDb?database=flights&collection=tickets&operation=getDbStats"); Object result = template.requestBody("direct:getDbStats", "irrelevantBody"); assertTrue("Result is not of type Document", result instanceof Document); The operation will return a data structure similar to the one displayed in the shell, in the form of a Document in the OUT message body. 101.8.5.3. getColStats Equivalent of running the db.collection.stats() command in the MongoDB shell, which displays useful statistic figures about the collection. For example: Usage example: // from("direct:getColStats").to("mongodb:myDb?database=flights&collection=tickets&operation=getColStats"); Object result = template.requestBody("direct:getColStats", "irrelevantBody"); assertTrue("Result is not of type Document", result instanceof Document); The operation will return a data structure similar to the one displayed in the shell, in the form of a Document in the OUT message body. 101.8.5.4. command Run the body as a command on database. Useful for admin operation as getting host information, replication or sharding status. Collection parameter is not use for this operation. // route: from("command").to("mongodb:myDb?database=science&operation=command"); DBObject commandBody = new BasicDBObject("hostInfo", "1"); Object result = template.requestBody("direct:command", commandBody); 101.8.6. Dynamic operations An Exchange can override the endpoint's fixed operation by setting the CamelMongoDbOperation header, defined by the MongoDbConstants.OPERATION_HEADER constant. The values supported are determined by the MongoDbOperation enumeration and match the accepted values for the operation parameter on the endpoint URI. For example: // from("direct:insert").to("mongodb:myDb?database=flights&collection=tickets&operation=insert"); Object result = template.requestBodyAndHeader("direct:insert", "irrelevantBody", MongoDbConstants.OPERATION_HEADER, "count"); assertTrue("Result is not of type Long", result instanceof Long); 101.9. Consumers There are several types of consumers: Tailable Cursor Consumer Change Streams Consumer 101.9.1. Tailable Cursor Consumer MongoDB offers a mechanism to instantaneously consume ongoing data from a collection, by keeping the cursor open just like the tail -f command of *nix systems. This mechanism is significantly more efficient than a scheduled poll, due to the fact that the server pushes new data to the client as it becomes available, rather than making the client ping back at scheduled intervals to fetch new data. It also reduces otherwise redundant network traffic. There is only one requisite to use tailable cursors: the collection must be a "capped collection", meaning that it will only hold N objects, and when the limit is reached, MongoDB flushes old objects in the same order they were originally inserted. For more information, please refer to http://www.mongodb.org/display/DOCS/Tailable+Cursors . The Camel MongoDB component implements a tailable cursor consumer, making this feature available for you to use in your Camel routes. As new objects are inserted, MongoDB will push them as Document in natural order to your tailable cursor consumer, who will transform them to an Exchange and will trigger your route logic. 101.10. How the tailable cursor consumer works To turn a cursor into a tailable cursor, a few special flags are to be signalled to MongoDB when first generating the cursor. 
Once created, the cursor will then stay open and will block upon calling the MongoCursor.next() method until new data arrives. However, the MongoDB server reserves itself the right to kill your cursor if new data doesn't appear after an indeterminate period. If you are interested in continuing to consume new data, you have to regenerate the cursor. And to do so, you will have to remember the position where you left off or else you will start consuming from the top again. The Camel MongoDB tailable cursor consumer takes care of all these tasks for you. You will just need to provide the key to some field in your data of increasing nature, which will act as a marker to position your cursor every time it is regenerated, e.g. a timestamp, a sequential ID, etc. It can be of any datatype supported by MongoDB. Date, Strings and Integers are found to work well. We call this mechanism "tail tracking" in the context of this component. The consumer will remember the last value of this field and whenever the cursor is to be regenerated, it will run the query with a filter like: increasingField > lastValue , so that only unread data is consumed. Setting the increasing field: Set the key of the increasing field on the endpoint URI tailTrackIncreasingField option. In Camel 2.10, it must be a top-level field in your data, as nested navigation for this field is not yet supported. That is, the "timestamp" field is okay, but "nested.timestamp" will not work. Please open a ticket in the Camel JIRA if you do require support for nested increasing fields. Cursor regeneration delay: One thing to note is that if new data is not already available upon initialisation, MongoDB will kill the cursor instantly. Since we don't want to overwhelm the server in this case, a cursorRegenerationDelay option has been introduced (with a default value of 1000ms.), which you can modify to suit your needs. An example: from("mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime") .id("tailableCursorConsumer1") .autoStartup(false) .to("mock:test"); The above route will consume from the "flights.cancellations" capped collection, using "departureTime" as the increasing field, with a default regeneration cursor delay of 1000ms. 101.11. Persistent tail tracking Standard tail tracking is volatile and the last value is only kept in memory. However, in practice you will need to restart your Camel container every now and then, but your last value would then be lost and your tailable cursor consumer would start consuming from the top again, very likely sending duplicate records into your route. To overcome this situation, you can enable the persistent tail tracking feature to keep track of the last consumed increasing value in a special collection inside your MongoDB database too. When the consumer initialises again, it will restore the last tracked value and continue as if nothing happened. The last read value is persisted on two occasions: every time the cursor is regenerated and when the consumer shuts down. We may consider persisting at regular intervals too in the future (flush every 5 seconds) for added robustness if the demand is there. To request this feature, please open a ticket in the Camel JIRA. 101.12.
Enabling persistent tail tracking To enable this function, set at least the following options on the endpoint URI: persistentTailTracking option to true persistentId option to a unique identifier for this consumer, so that the same collection can be reused across many consumers Additionally, you can set the tailTrackDb , tailTrackCollection and tailTrackField options to customise where the runtime information will be stored. Refer to the endpoint options table at the top of this page for descriptions of each option. For example, the following route will consume from the "flights.cancellations" capped collection, using "departureTime" as the increasing field, with a default regeneration cursor delay of 1000ms, with persistent tail tracking turned on, and persisting under the "cancellationsTracker" id on the "flights.camelTailTracking", storing the last processed value under the "lastTrackingValue" field ( camelTailTracking and lastTrackingValue are defaults). from("mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime&persistentTailTracking=true" + "&persistentId=cancellationsTracker") .id("tailableCursorConsumer2") .autoStartup(false) .to("mock:test"); Below is another example identical to the one above, but where the persistent tail tracking runtime information will be stored under the "trackers.camelTrackers" collection, in the "lastProcessedDepartureTime" field: from("mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime&persistentTailTracking=true" + "&persistentId=cancellationsTracker&tailTrackDb=trackers&tailTrackCollection=camelTrackers" + "&tailTrackField=lastProcessedDepartureTime") .id("tailableCursorConsumer3") .autoStartup(false) .to("mock:test"); 101.12.1. Change Streams Consumer Change Streams allow applications to access real-time data changes without the complexity and risk of tailing the MongoDB oplog. Applications can use change streams to subscribe to all data changes on a collection and immediately react to them. Because change streams use the aggregation framework, applications can also filter for specific changes or transform the notifications at will. The exchange body will contain the full document of any change. To configure Change Streams Consumer you need to specify consumerType , database , collection and optional JSON property streamFilter to filter events. That JSON property is standard MongoDB USDmatch aggregation. It could be easily specified using XML DSL configuration: <route id="filterConsumer"> <from uri="mongodb:myDb?consumerType=changeStreams&database=flights&collection=tickets&streamFilter={ 'USDmatch':{'USDor':[{'fullDocument.stringValue': 'specificValue'}]} }"/> <to uri="mock:test"/> </route> Java configuration: from("mongodb:myDb?consumerType=changeStreams&database=flights&collection=tickets&streamFilter={ 'USDmatch':{'USDor':[{'fullDocument.stringValue': 'specificValue'}]} }") .to("mock:test"); Note You can externalize the streamFilter value into a property placeholder which allows the endpoint URI parameters to be cleaner and easier to read. The changeStreams consumer type will also return the following OUT headers: Header key Quick constant Description (extracted from MongoDB API doc) Data type CamelMongoDbStreamOperationType MongoDbConstants.STREAM_OPERATION_TYPE The type of operation that occurred. Can be any of the following values: insert, delete, replace, update, drop, rename, dropDatabase, invalidate. 
String _id MongoDbConstants.MONGO_ID A document that contains the _id of the document created or modified by the insert, replace, delete, update operations (i.e. CRUD operations). For sharded collections, also displays the full shard key for the document. The _id field is not repeated if it is already a part of the shard key. ObjectId 101.13. Type conversions The MongoDbBasicConverters type converter included with the camel-mongodb component provides the following conversions: Name From type To type How? fromMapToDocument Map Document constructs a new Document via the new Document(Map m) constructor. fromDocumentToMap Document Map Document already implements Map . fromStringToDocument String Document uses com.mongodb.Document.parse(String s) . fromStringToObjectId String ObjectId constructs a new ObjectId via the new ObjectId(s) fromFileToDocument File Document uses fromInputStreamToDocument under the hood fromInputStreamToDocument InputStream Document converts the inputstream bytes to a Document fromStringToList String List<Bson> uses org.bson.codecs.configuration.CodecRegistries to convert to BsonArray then to List<Bson>. This type converter is auto-discovered, so you don't need to configure anything manually. 101.14. Spring Boot Auto-Configuration The component supports 5 options, which are listed below. Name Description Default Type camel.component.mongodb.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.mongodb.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.mongodb.enabled Whether to enable auto configuration of the mongodb component. This is enabled by default. Boolean camel.component.mongodb.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.mongodb.mongo-connection Shared client used for connection. All endpoints generated from the component will share this connection client. The option is a com.mongodb.client.MongoClient type. MongoClient | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mongodb-starter</artifactId> </dependency>",
"mongodb:connectionBean?database=databaseName&collection=collectionName&operation=operationName[&moreOptions...]",
"mongodb:connectionBean",
"<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:context=\"http://www.springframework.org/schema/context\" xmlns:mongo=\"http://www.springframework.org/schema/data/mongo\" xsi:schemaLocation=\"http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd http://www.springframework.org/schema/data/mongo http://www.springframework.org/schema/data/mongo/spring-mongo.xsd http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd\"> <mongo:mongo-client id=\"mongoBean\" host=\"USD{mongo.url}\" port=\"USD{mongo.port}\" credentials=\"USD{mongo.user}:USD{mongo.pass}@USD{mongo.dbname}\"> <mongo:client-options write-concern=\"NORMAL\" /> </mongo:mongo-client> </beans>",
"<route> <from uri=\"direct:start\" /> <!-- using bean 'mongoBean' defined above --> <to uri=\"mongodb:mongoBean?database=USD{mongodb.database}&collection=USD{mongodb.collection}&operation=getDbStats\" /> <to uri=\"direct:result\" /> </route>",
"from(\"direct:findById\") .to(\"mongodb:myDb?database=flights&collection=tickets&operation=findById\") .to(\"mock:resultFindById\");",
"from(\"direct:findById\") .convertBodyTo(ObjectId.class) .to(\"mongodb:myDb?database=flights&collection=tickets&operation=findById\") .to(\"mock:resultFindById\");",
"from(\"direct:findOneByQuery\") .to(\"mongodb:myDb?database=flights&collection=tickets&operation=findOneByQuery\") .to(\"mock:resultFindOneByQuery\");",
"from(\"direct:findOneByQuery\") .setHeader(MongoDbConstants.CRITERIA, constant(Filters.eq(\"name\", \"Raul Kripalani\"))) .to(\"mongodb:myDb?database=flights&collection=tickets&operation=findOneByQuery\") .to(\"mock:resultFindOneByQuery\");",
"from(\"direct:findAll\") .to(\"mongodb:myDb?database=flights&collection=tickets&operation=findAll\") .to(\"mock:resultFindAll\");",
"from(\"direct:findAll\") .setHeader(MongoDbConstants.CRITERIA, Filters.eq(\"name\", \"Raul Kripalani\")) .to(\"mongodb:myDb?database=flights&collection=tickets&operation=findAll\") .to(\"mock:resultFindAll\");",
"from(\"direct:findAll\") .setHeader(MongoDbConstants.BATCH_SIZE).constant(10) .setHeader(MongoDbConstants.CRITERIA, constant(Filters.eq(\"name\", \"Raul Kripalani\"))) .to(\"mongodb:myDb?database=flights&collection=tickets&operation=findAll&outputType=MongoIterable\") .to(\"mock:resultFindAll\");",
"// from(\"direct:count\").to(\"mongodb:myDb?database=tickets&collection=flights&operation=count&dynamicity=true\"); Long result = template.requestBodyAndHeader(\"direct:count\", \"irrelevantBody\", MongoDbConstants.COLLECTION, \"dynamicCollectionName\"); assertTrue(\"Result is not of type Long\", result instanceof Long);",
"Document query = Long count = template.requestBodyAndHeader(\"direct:count\", query, MongoDbConstants.COLLECTION, \"dynamicCollectionName\");",
"// route: from(\"direct:findAll\").to(\"mongodb:myDb?database=flights&collection=tickets&operation=findAll\") Bson fieldProjection = Projection.exclude(\"_id\", \"boringField\"); Object result = template.requestBodyAndHeader(\"direct:findAll\", ObjectUtils.NULL, MongoDbConstants.FIELDS_PROJECTION, fieldProjection);",
"// route: from(\"direct:findAll\").to(\"mongodb:myDb?database=flights&collection=tickets&operation=findAll\") Bson fieldProjection = Projection.exclude(\"_id\", \"boringField\"); Object result = template.requestBodyAndHeader(\"direct:findAll\", ObjectUtils.NULL, MongoDbConstants.FIELDS_PROJECTION, fieldProjection);",
"// route: from(\"direct:findAll\").to(\"mongodb:myDb?database=flights&collection=tickets&operation=findAll\") Bson sorts = Sorts.descending(\"_id\"); Object result = template.requestBodyAndHeader(\"direct:findAll\", ObjectUtils.NULL, MongoDbConstants.SORT_BY, sorts);",
".from(\"direct:someTriggeringEvent\") .setHeader(MongoDbConstants.SORT_BY).constant(Sorts.descending(\"documentTimestamp\")) .setHeader(MongoDbConstants.FIELDS_PROJECTION).constant(Projection.include(\"documentTimestamp\")) .setBody().constant(\"{}\") .to(\"mongodb:myDb?database=local&collection=myDemoCollection&operation=findOneByQuery\") .to(\"direct:aMyBatisParameterizedSelect\");",
"from(\"direct:insert\") .to(\"mongodb:myDb?database=flights&collection=tickets&operation=insert\");",
"from(\"direct:insert\") .to(\"mongodb:myDb?database=flights&collection=tickets&operation=save\");",
"// route: from(\"direct:insert\").to(\"mongodb:myDb?database=flights&collection=tickets&operation=save\"); org.bson.Document docForSave = new org.bson.Document(); docForSave.put(\"key\", \"value\"); Object result = template.requestBody(\"direct:insert\", docForSave);",
"// route: from(\"direct:update\").to(\"mongodb:myDb?database=science&collection=notableScientists&operation=update\"); List<Bson> body = new ArrayList<>(); Bson filterField = Filters.eq(\"filterField\", true); body.add(filterField); BsonDocument updateObj = new BsonDocument().append(\"USDset\", new BsonDocument(\"scientist\", new BsonString(\"Darwin\"))); body.add(updateObj); Object result = template.requestBodyAndHeader(\"direct:update\", body, MongoDbConstants.MULTIUPDATE, true);",
"// route: from(\"direct:update\").to(\"mongodb:myDb?database=science&collection=notableScientists&operation=update\"); Maps<String, Object> headers = new HashMap<>(2); headers.add(MongoDbConstants.MULTIUPDATE, true); headers.add(MongoDbConstants.FIELDS_FILTER, Filters.eq(\"filterField\", true)); String updateObj = Updates.set(\"scientist\", \"Darwin\");; Object result = template.requestBodyAndHeaders(\"direct:update\", updateObj, headers);",
"// route: from(\"direct:update\").to(\"mongodb:myDb?database=science&collection=notableScientists&operation=update\"); String updateObj = \"[{\\\"filterField\\\": true}, {\\\"USDset\\\", {\\\"scientist\\\", \\\"Darwin\\\"}}]\"; Object result = template.requestBodyAndHeader(\"direct:update\", updateObj, MongoDbConstants.MULTIUPDATE, true);",
"// route: from(\"direct:remove\").to(\"mongodb:myDb?database=science&collection=notableScientists&operation=remove\"); Bson conditionField = Filters.eq(\"conditionField\", true); Object result = template.requestBody(\"direct:remove\", conditionField);",
"// route: from(\"direct:bulkWrite\").to(\"mongodb:myDb?database=science&collection=notableScientists&operation=bulkWrite\"); List<WriteModel<Document>> bulkOperations = Arrays.asList( new InsertOneModel<>(new Document(\"scientist\", \"Pierre Curie\")), new UpdateOneModel<>(new Document(\"_id\", \"5\"), new Document(\"USDset\", new Document(\"scientist\", \"Marie Curie\"))), new DeleteOneModel<>(new Document(\"_id\", \"3\"))); BulkWriteResult result = template.requestBody(\"direct:bulkWrite\", bulkOperations, BulkWriteResult.class);",
"// route: from(\"direct:aggregate\").to(\"mongodb:myDb?database=science&collection=notableScientists&operation=aggregate\"); List<Bson> aggregate = Arrays.asList(match(or(eq(\"scientist\", \"Darwin\"), eq(\"scientist\", group(\"USDscientist\", sum(\"count\", 1))); from(\"direct:aggregate\") .setBody().constant(aggregate) .to(\"mongodb:myDb?database=science&collection=notableScientists&operation=aggregate\") .to(\"mock:resultAggregate\");",
"List<Bson> aggregate = Arrays.asList(match(or(eq(\"scientist\", \"Darwin\"), eq(\"scientist\", group(\"USDscientist\", sum(\"count\", 1))); from(\"direct:aggregate\") .setHeader(MongoDbConstants.BATCH_SIZE).constant(10) .setBody().constant(aggregate) .to(\"mongodb:myDb?database=science&collection=notableScientists&operation=aggregate&outputType=MongoIterable\") .split(body()) .streaming() .to(\"mock:resultAggregate\");",
"> db.stats(); { \"db\" : \"test\", \"collections\" : 7, \"objects\" : 719, \"avgObjSize\" : 59.73296244784423, \"dataSize\" : 42948, \"storageSize\" : 1000058880, \"numExtents\" : 9, \"indexes\" : 4, \"indexSize\" : 32704, \"fileSize\" : 1275068416, \"nsSizeMB\" : 16, \"ok\" : 1 }",
"// from(\"direct:getDbStats\").to(\"mongodb:myDb?database=flights&collection=tickets&operation=getDbStats\"); Object result = template.requestBody(\"direct:getDbStats\", \"irrelevantBody\"); assertTrue(\"Result is not of type Document\", result instanceof Document);",
"> db.camelTest.stats(); { \"ns\" : \"test.camelTest\", \"count\" : 100, \"size\" : 5792, \"avgObjSize\" : 57.92, \"storageSize\" : 20480, \"numExtents\" : 2, \"nindexes\" : 1, \"lastExtentSize\" : 16384, \"paddingFactor\" : 1, \"flags\" : 1, \"totalIndexSize\" : 8176, \"indexSizes\" : { \"_id_\" : 8176 }, \"ok\" : 1 }",
"// from(\"direct:getColStats\").to(\"mongodb:myDb?database=flights&collection=tickets&operation=getColStats\"); Object result = template.requestBody(\"direct:getColStats\", \"irrelevantBody\"); assertTrue(\"Result is not of type Document\", result instanceof Document);",
"// route: from(\"command\").to(\"mongodb:myDb?database=science&operation=command\"); DBObject commandBody = new BasicDBObject(\"hostInfo\", \"1\"); Object result = template.requestBody(\"direct:command\", commandBody);",
"// from(\"direct:insert\").to(\"mongodb:myDb?database=flights&collection=tickets&operation=insert\"); Object result = template.requestBodyAndHeader(\"direct:insert\", \"irrelevantBody\", MongoDbConstants.OPERATION_HEADER, \"count\"); assertTrue(\"Result is not of type Long\", result instanceof Long);",
"from(\"mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime\") .id(\"tailableCursorConsumer1\") .autoStartup(false) .to(\"mock:test\");",
"from(\"mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime&persistentTailTracking=true\" + \"&persistentId=cancellationsTracker\") .id(\"tailableCursorConsumer2\") .autoStartup(false) .to(\"mock:test\");",
"from(\"mongodb:myDb?database=flights&collection=cancellations&tailTrackIncreasingField=departureTime&persistentTailTracking=true\" + \"&persistentId=cancellationsTracker&tailTrackDb=trackers&tailTrackCollection=camelTrackers\" + \"&tailTrackField=lastProcessedDepartureTime\") .id(\"tailableCursorConsumer3\") .autoStartup(false) .to(\"mock:test\");",
"<route id=\"filterConsumer\"> <from uri=\"mongodb:myDb?consumerType=changeStreams&database=flights&collection=tickets&streamFilter={ 'USDmatch':{'USDor':[{'fullDocument.stringValue': 'specificValue'}]} }\"/> <to uri=\"mock:test\"/> </route>",
"from(\"mongodb:myDb?consumerType=changeStreams&database=flights&collection=tickets&streamFilter={ 'USDmatch':{'USDor':[{'fullDocument.stringValue': 'specificValue'}]} }\") .to(\"mock:test\");"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-mongodb-component-starter |
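The MongoDB snippets in the listing above are fragments taken out of their enclosing classes. As a rough, self-contained sketch only, a minimal route could look like the following; it assumes a com.mongodb.client.MongoClient bean registered under the name myDb (for example, auto-configured by Spring Boot from spring.data.mongodb.uri), and the class name is illustrative rather than taken from this reference.

    import org.apache.camel.builder.RouteBuilder;

    public class MongoDbTicketsRoute extends RouteBuilder {
        @Override
        public void configure() {
            // Look up a single document by its _id in the tickets collection of the flights database.
            from("direct:findById")
                .to("mongodb:myDb?database=flights&collection=tickets&operation=findById");
        }
    }

A ProducerTemplate can then invoke the route with the _id value as the message body, for example template.requestBody("direct:findById", "5", org.bson.Document.class).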
Chapter 5. Configuring memory on Compute nodes | Chapter 5. Configuring memory on Compute nodes Warning The content for this feature is available in this release as a Documentation Preview , and therefore is not fully verified by Red Hat. Use it only for testing, and do not use in a production environment. As a cloud administrator, you can configure the scheduling and placement of instances for optimal performance by creating customized flavors to target specialized workloads, including NFV and High Performance Computing (HPC). Use the following features to tune your instances: Overallocation : Tune the virtual RAM to physical RAM allocation ratio. Swap : Tune the allocated swap size to handle memory overcommit. Huge pages : Tune instance memory allocation policies both for normal memory (4k pages) and huge pages (2 MB or 1 GB pages). File-backed memory : Use to expand your Compute node memory capacity. SEV : Use to enable your cloud users to create instances that use memory encryption. 5.1. Configuring memory for overallocation When you use memory overcommit ( ram_allocation_ratio >= 1.0), you need to deploy your overcloud with enough swap space to support the allocation ratio. Note If your ram_allocation_ratio parameter is set to < 1 , follow the RHEL recommendations for swap size. For more information, see Recommended system swap space in the RHEL Managing Storage Devices guide. Prerequisites You have calculated the swap size your node requires. For more information, see Calculating swap size . Procedure Open the OpenStackDataPlaneNodeSet CR definition file for the node set you want to update, for example, my_data_plane_node_set.yaml . Add the required configuration or modify the existing configuration under ansibleVars : Save the OpenStackDataPlaneNodeSet CR definition file. Apply the updated OpenStackDataPlaneNodeSet CR configuration: Verify that the data plane resource has been updated: Sample output: Create a file on your workstation to define the OpenStackDataPlaneDeployment CR, for example, my_data_plane_deploy.yaml : Tip Give the definition file and the OpenStackDataPlaneDeployment CR a unique and descriptive name that indicates the purpose of the modified node set. Add the OpenStackDataPlaneNodeSet CR that you modified: Save the OpenStackDataPlaneDeployment CR deployment file. Deploy the modified OpenStackDataPlaneNodeSet CR: You can view the Ansible logs while the deployment executes: Verify that the modified OpenStackDataPlaneNodeSet CR is deployed: Example: Sample output Repeat the oc get command until you see the NodeSet Ready message: Example Sample output: For information on the meaning of the returned status, see Data plane conditions and states in the Deploying Red Hat OpenStack Services on OpenShift . 5.2. Calculating reserved host memory on Compute nodes To determine the total amount of RAM to reserve for host processes, you need to allocate enough memory for each of the following: The resources that run on the host, for example, Ceph Object Storage Daemon (OSD) consumes 3 GB of memory. The emulator overhead required to host instances. The hypervisor for each instance. You can use the following formula to calculate the amount of memory to reserve for host processes on each node: Replace vm_no with the number of instances. Replace avg_instance_size with the average amount of memory each instance can use. Replace overhead with the hypervisor overhead required for each instance. 
Replace resource1 and all resources up to <resourceN> with the number of a resource type on the node. Replace resource_ram with the amount of RAM each resource of this type requires. Note If this host will run workloads with a guest NUMA topology, for example, instances with CPU pinning, huge pages, or an explicit NUMA topology specified in the flavor, you must use the reserved_huge_pages configuration option to reserve the memory per NUMA node as 4096 pages. For information about how to calculate the reserved_host_memory_mb value, see Calculating reserved host memory on Compute nodes . 5.3. Calculating swap size The allocated swap size must be large enough to handle any memory overcommit. You can use the following formulas to calculate the swap size your node requires: overcommit_ratio = ram_allocation_ratio - 1 Minimum swap size (MB) = (total_RAM * overcommit_ratio) + RHEL_min_swap Recommended (maximum) swap size (MB) = total_RAM * (overcommit_ratio + percentage_of_RAM_to_use_for_swap) The percentage_of_RAM_to_use_for_swap variable creates a buffer to account for QEMU overhead and any other resources consumed by the operating system or host services. For instance, to use 25% of the available RAM for swap, with 64GB total RAM, and ram_allocation_ratio set to 1 : Recommended (maximum) swap size = 64000 MB * (0 + 0.25) = 16000 MB For information about how to determine the RHEL_min_swap value, see Recommended system swap space in the RHEL Managing Storage Devices guide. 5.4. Configuring huge pages on Compute nodes As a cloud administrator, you can configure Compute nodes to enable instances to request huge pages. Note Configuring huge pages creates an implicit NUMA topology on the instance even if a NUMA topology is not requested. Do not run NUMA and non-NUMA virtual machines (VMs) on the same hosts. Prerequisites The oc command line tool is installed on your workstation. You are logged in to Red Hat OpenStack Services on OpenShift (RHOSO) as a user with cluster-admin privileges. You have selected the OpenStackDataPlaneNodeSet CR that defines the nodes that can enable instances to request huge pages. For more information about creating an OpenStackDataPlaneNodeSet CR, see Creating an OpenStackDataPlaneNodeSet CR with pre-provisioned nodes in the Deploying Red Hat OpenStack Services on OpenShift guide. Procedure Create or update the ConfigMap CR named nova-extra-config.yaml and set the values of the parameters under [default] and [libvirt]: 1 The name of the new Compute configuration file. The nova-operator generates the default configuration file with the name 01-nova.conf . Do not use the default name, because it would override the infrastructure configuration, such as the transport_url . The nova-compute service applies every file under /etc/nova/nova.conf.d/ in lexicographical order, therefore configurations defined in later files override the same configurations defined in an earlier file. Note Do not configure CPU feature flags to allow instances to only request 2 MB huge pages. You can only allocate 1G huge pages to an instance if the host supports 1G huge page allocation. You only need to set cpu_model_extra_flags to pdpe1gb when cpu_mode is set to host-model or custom . If the host supports pdpe1gb , and host-passthrough is used as the cpu_mode , then you do not need to set pdpe1gb as a cpu_model_extra_flags . Note The pdpe1gb flag is only included in Opteron_G4 and Opteron_G5 CPU models, it is not included in any of the Intel CPU models supported by QEMU. 
To mitigate CPU hardware issues, such as Microarchitectural Data Sampling (MDS), you might need to configure other CPU flags. For more information, see RHOS Mitigation for MDS ("Microarchitectural Data Sampling") Security Flaws. For more information about creating ConfigMap objects, see Creating and using config maps . Create a new OpenStackDataPlaneDeployment CR to configure the services on the data plane nodes and deploy the data plane, and save it to a file named compute_huge_pages_deploy.yaml on your workstation: In the compute_huge_pages_deploy.yaml , specify nodeSets to include all the OpenStackDataPlaneNodeSet CRs you want to deploy. Ensure that you include the OpenStackDataPlaneNodeSet CR that you selected as a prerequisite. That OpenStackDataPlaneNodeSet CR defines the nodes you want to designate for huge pages. Warning You can configure only whole node sets. Reconfiguring a subset of the nodes within a node set is not supported. If you need to reconfigure a subset of nodes within a node set, you must scale the node set down, and create a new node set from the previously removed nodes. Warning If your deployment has more than one node set, changes to the nova-extra-config.yaml ConfigMap might directly affect more than one node set. To check if a node set uses the nova-extra-config.yaml ConfigMap and therefore will be affected by the reconfiguration, complete the following steps: Navigate to the services list of the node set and find the name of the DataPlaneService` that points to nova. Ensure that the value of the edpmServiceType field of the DataPlaneService is set to nova . If the dataSources list of the DataPlaneService contains a configMapRef named nova-extra-config , then this node set uses this ConfigMap and therefore will be affected by the configuration changes in this ConfigMap . If some of the node sets that are affected should not be reconfigured, you must create a new DataPlaneService pointing to a separate ConfigMap for these node sets. Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment. Save the compute_huge_pages_deploy.yaml deployment file. Deploy the data plane: Verify that the data plane is deployed: Access the remote shell for openstackclient and verify that the deployed Compute nodes are visible on the control plane: 5.4.1. Creating a huge pages flavor for instances To enable your cloud users to create instances that use huge pages, you can create a flavor with the hw:mem_page_size extra spec key for launching instances. Note To execute openstack client commands on the cloud you must specify the name of the cloud detailed in your clouds.yaml file. You can specify the name of the cloud by using one of the following methods: Use the --os-cloud option with each command: Use this option if you access more than one cloud. Create an environment variable for the cloud name in your bashrc file: Prerequisites The Compute node is configured for huge pages. For more information, see Configuring huge pages on Compute nodes . Procedure Create a flavor for instances that require huge pages: To request huge pages, set the hw:mem_page_size property of the flavor to the required size: Replace <page_size> with one of the following valid values: large : Selects the largest page size supported on the host, which may be 2 MB or 1 GB on x86_64 systems. small : (Default) Selects the smallest page size supported on the host. On x86_64 systems this is 4 kB (normal pages). 
any : Selects the page size by using the hw_mem_page_size set on the image. If the page size is not specified by the image, selects the largest available page size, as determined by the libvirt driver. <pagesize> : Set an explicit page size if the workload has specific requirements. Use an integer value for the page size in KB, or any standard suffix. For example: 4KB, 2MB, 2048, 1GB. To verify the flavor creates an instance with huge pages, use your new flavor to launch an instance: The Compute scheduler identifies a host with enough free huge pages of the required size to back the memory of the instance. If the scheduler is unable to find a host and NUMA node with enough pages, then the request will fail with a NoValidHost error. 5.4.2. Mounting multiple huge page folders during first boot You can configure the Compute service (nova) to handle multiple page sizes as part of the first boot process. Procedure Open the OpenStackDataPlaneNodeSet CR definition file for the node set you want to update, for example, my_data_plane_node_set.yaml . Add the required configuration or modify the existing configuration in the edpm_default_mounts template under ansibleVars : Save the OpenStackDataPlaneNodeSet CR definition file. Apply the updated OpenStackDataPlaneNodeSet CR configuration: Verify that the data plane resource has been updated: Sample output: Create a file on your workstation to define the OpenStackDataPlaneDeployment CR, for example, my_data_plane_deploy.yaml : Tip Give the definition file and the OpenStackDataPlaneDeployment CR a unique and descriptive name that indicates the purpose of the modified node set. Add the OpenStackDataPlaneNodeSet CR that you modified: Save the OpenStackDataPlaneDeployment CR deployment file. Deploy the modified OpenStackDataPlaneNodeSet CR: To view the Ansible logs while the deployment executes, enter the following command: Verify that the modified OpenStackDataPlaneNodeSet CR is deployed: Example: Sample output: Repeat the oc get command until you see the NodeSet Ready message: Example: Sample output: For information on the meaning of the returned status, see Data plane conditions and states in the Deploying Red Hat OpenStack Services on OpenShift . 5.5. Configuring Compute nodes to use file-backed memory for instances You can use file-backed memory to expand your Compute node memory capacity. In this case, you allocate files within the libvirt memory backing directory to be instance memory. You can configure the amount of the host disk that is available for instance memory, and the location on the disk of the instance memory files. The Compute (nova) service reports the capacity configured for file-backed memory to the Placement service as the total system memory capacity. This allows the Compute node to host more instances than would normally fit within the system memory. To use file-backed memory for instances, you must enable file-backed memory on the Compute node. Limitations You cannot live migrate instances between Compute nodes that have file-backed memory enabled and Compute nodes that do not have file-backed memory enabled. File-backed memory is not compatible with huge pages. Instances that use huge pages cannot start on a Compute node with file-backed memory enabled. Use host aggregates to ensure that instances that use huge pages are not placed on Compute nodes with file-backed memory enabled. File-backed memory is not compatible with memory overcommit. You cannot reserve memory for host processes using reserved_host_memory_mb . 
When file-backed memory is in use, reserved memory corresponds to disk space not set aside for file-backed memory. File-backed memory is reported to the Placement service as the total system memory, with RAM used as cache memory. Prerequisites ram_allocation_ratio must be set to "1.0" on the node and any host aggregate the node is added to. reserved_host_memory_mb must be set to "0". The oc command line tool is installed on your workstation. You are logged in to Red Hat OpenStack Services on OpenShift (RHOSO) as a user with cluster-admin privileges. You have selected the OpenStackDataPlaneNodeSet CR that defines which nodes use file-backed memory for instances. For more information about creating an OpenStackDataPlaneNodeSet CR, see Creating an OpenStackDataPlaneNodeSet CR with pre-provisioned nodes in the Deploying Red Hat OpenStack Services on OpenShift guide. Procedure Create or update the ConfigMap CR named nova-extra-config.yaml and set the values of the parameters under [libvirt]: For more information about creating ConfigMap objects, see Creating and using config maps . Optional: To configure the directory to store the memory backing files, set the memory_backing_dir parameter. The default memory backing directory is /var/lib/libvirt/qemu/ram/ : Replace <new_directory_location> with the location of the memory backing directory. Note You must locate your backing store in a directory at or above the default directory location, /var/lib/libvirt/qemu/ram/ . You can also change the host disk for the backing store. For more information, see Changing the memory backing directory host disk . Create a new OpenStackDataPlaneDeployment CR to configure the services on the data plane nodes and deploy the data plane, and save it to a file named compute_file_backed_memory_deploy.yaml on your workstation: In the compute_file_backed_memory_deploy.yaml , specify nodeSets to include all the OpenStackDataPlaneNodeSet CRs you want to deploy. Ensure that you include the OpenStackDataPlaneNodeSet CR that you selected as a prerequisite. That OpenStackDataPlaneNodeSet CR defines the nodes you want to designate for file-backed memory. Warning You can configure only whole node sets. Reconfiguring a subset of the nodes within a node set is not supported. If you need to reconfigure a subset of nodes within a node set, you must scale the node set down, and create a new node set from the previously removed nodes. Warning If your deployment has more than one node set, changes to the nova-extra-config.yaml ConfigMap might directly affect more than one node set, depending on how the node sets and the DataPlaneServices are configured. To check if a node set uses the nova-extra-config ConfigMap and therefore will be affected by the reconfiguration, complete the following steps: Check the services list of the node set and find the name of the DataPlaneService that points to nova. Ensure that the value of the edpmServiceType field of the DataPlaneService is set to nova . If the dataSources list of the DataPlaneService contains a configMapRef named nova-extra-config , then this node set uses this ConfigMap and therefore will be affected by the configuration changes in this ConfigMap . If some of the node sets that are affected should not be reconfigured, you must create a new DataPlaneService pointing to a separate ConfigMap for these node sets. Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment. 
Save the compute_file_backed_memory_deploy.yaml deployment file. Deploy the data plane: Verify that the data plane is deployed: Access the remote shell for openstackclient and verify that the deployed Compute nodes are visible on the control plane: 5.5.1. Changing the memory backing directory host disk You can move the memory backing directory from the default primary disk location to an alternative disk. Procedure Create a file system on the alternative backing device. For example, enter the following command to create an ext4 filesystem on /dev/sdb : Mount the backing device. For example, enter the following command to mount /dev/sdb on the default libvirt memory backing directory: Note The mount point must match the value of the QemuMemoryBackingDir parameter. 5.6. Configuring AMD SEV Compute nodes to provide memory encryption for instances Secure Encrypted Virtualization (SEV) hardware, provided by AMD, protects the data in DRAM that a running virtual machine instance is using. SEV encrypts the memory of each instance with a unique key. As a cloud administrator, you can provide cloud users the ability to create instances that run on SEV-capable Compute nodes with memory encryption enabled. This feature is available to use from the 2nd Gen AMD EPYCTM 7002 Series ("Rome"). To enable your cloud users to create instances that use memory encryption, you must perform the following tasks: Designate the AMD SEV Compute nodes for memory encryption. Configure the Compute nodes for memory encryption. Deploy the data plane. Create a flavor or image for launching instances with memory encryption. Tip If the AMD SEV hardware is limited, you can also configure a host aggregate to optimize scheduling on the AMD SEV Compute nodes. To schedule only instances that request memory encryption on the AMD SEV Compute nodes, create a host aggregate of the Compute nodes that have the AMD SEV hardware, and configure the Compute scheduler to place only instances that request memory encryption on the host aggregate. For more information, see Creating and managing host aggregates and Filtering by isolating host aggregates . 5.6.1. Secure Encrypted Virtualization (SEV) Secure Encrypted Virtualization (SEV), provided by AMD, protects the data in DRAM that a running virtual machine instance is using. SEV encrypts the memory of each instance with a unique key. SEV increases security when you use non-volatile memory technology (NVDIMM), because an NVDIMM chip can be physically removed from a system with the data intact, similar to a hard drive. Without encryption, any stored information such as sensitive data, passwords, or secret keys can be compromised. For more information, see the AMD Secure Encrypted Virtualization (SEV) documentation. Limitations of instances with memory encryption You cannot live migrate, or suspend and resume instances with memory encryption. You cannot use PCI passthrough to directly access devices on instances with memory encryption. You cannot use virtio-blk as the boot disk of instances with memory encryption with Red Hat Enterprise Linux (RHEL) kernels earlier than kernel-4.18.0-115.el8 (RHEL-8.1.0). Note You can use virtio-scsi or SATA as the boot disk, or virtio-blk for non-boot disks. The operating system that runs in an encrypted instance must provide SEV support. For more information, see the Red Hat Knowledgebase solution Enabling AMD Secure Encrypted Virtualization in RHEL 8 . 
Machines that support SEV have a limited number of slots in their memory controller for storing encryption keys. Each running instance with encrypted memory consumes one of these slots. Therefore, the number of instances with memory encryption that can run concurrently is limited to the number of slots in the memory controller. For example, on 1st Gen AMD EPYCTM 7001 Series ("Naples") the limit is 16, and on 2nd Gen AMD EPYCTM 7002 Series ("Rome") the limit is 255. Instances with memory encryption pin pages in RAM. The Compute service cannot swap these pages, therefore you cannot overcommit memory on a Compute node that hosts instances with memory encryption. You cannot use memory encryption with instances that have multiple NUMA nodes. 5.6.2. Designating AMD SEV Compute nodes for memory encryption To designate AMD SEV Compute nodes for instances that use memory encryption, you must create a new node set to configure the AMD SEV role, and configure the bare metal nodes with an AMD SEV resource class to use to tag the Compute nodes for memory encryption. Note The following procedure applies to new data plane nodes that have not yet been provisioned. To assign a resource class to an existing node that has already been provisioned, you must use the scale down procedure to unprovision the node, then use the scale up procedure to reprovision the node with the new resource class assignment. For more information, see Configuring a node set for a feature or workload in Customizing the Red Hat OpenStack Services on OpenShift deployment . 5.6.3. Configuring AMD SEV Compute nodes for memory encryption To enable your cloud users to create instances that use memory encryption, you must configure the Compute nodes that have the AMD SEV hardware. Prerequisites The oc command line tool is installed on your workstation. You are logged in to Red Hat OpenStack Services on OpenShift (RHOSO) as a user with cluster-admin privileges. You have selected the OpenStackDataPlaneNodeSet CR that defines the nodes you want to configure CPU pinning on. For more information about creating an OpenStackDataPlaneNodeSet CR, see Creating an OpenStackDataPlaneNodeSet CR with pre-provisioned nodes in the Deploying Red Hat OpenStack Services on OpenShift guide. Your deployment must include a Compute node that runs on AMD hardware capable of supporting SEV, such as an AMD EPYC CPU. You can use the following command to determine if your deployment is SEV-capable: Procedure Create or update the ConfigMap CR named nova-extra-config.yaml and set the values of the parameters under [libvirt]: Note The default value of the libvirt/num_memory_encrypted_guests parameter is none. If you do not set a custom value, the AMD SEV Compute nodes do not impose a limit on the number of memory-encrypted instances that the nodes can host concurrently. Instead, the hardware determines the maximum number of memory-encrypted instances that the AMD SEV Compute nodes can host concurrently, which might cause some memory-encrypted instances to fail to launch. Note Q35 machine type is the default machine type and is required for SEV. To configure the kernel parameters for the AMD SEV Compute nodes, open the OpenStackDataPlaneNodeSet CR definition file for the node set you want to update, for example, my_data_plane_node_set.yaml . Add the required network configuration or modify the existing configuration. 
Place the configuration in the under ansibleVars : Create a new OpenStackDataPlaneDeployment CR to configure the services on the data plane nodes and deploy the data plane, and save it to a file named compute_amd_sev_deploy.yaml on your workstation: In the compute_amd_sev_deploy.yaml , specify nodeSets to include all the OpenStackDataPlaneNodeSet CRs you want to deploy. Ensure that you include the OpenStackDataPlaneNodeSet CR that you selected as a prerequisite. That OpenStackDataPlaneNodeSet CR defines the nodes you want to designate for memory encryption. Warning You can configure only whole node sets. Reconfiguring a subset of the nodes within a node set is not supported. If you need to reconfigure a subset of nodes within a node set, you must scale the node set down, and create a new node set from the previously removed nodes. Warning If your deployment has more than one node set, changes to the nova-extra-config.yaml ConfigMap might directly affect more than one node set, depending on how the NodeSets and the DataPlaneServices are configured. To check if a node set uses the nova-extra-config.yaml ConfigMap and therefore will be affected by the reconfiguration, complete the following steps: Check the services list of the node set and find the name of the DataPlaneService that points to nova. Ensure that the value of the edpmServiceType field of the DataPlaneService is set to nova . If the dataSources list of the DataPlaneService contains a configMapRef named nova-extra-config , then this node set uses this ConfigMap and therefore will be affected by the configuration changes in this ConfigMap. If some of the node sets that are affected should not be reconfigured, you must create a new DataPlaneService pointing to a separate ConfigMap for these node sets. Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment. Save the compute_amd_sev_deploy.yaml deployment file. Deploy the data plane: Verify that the data plane is deployed: Access the remote shell for openstackclient and verify that the deployed Compute nodes are visible on the control plane: 5.6.4. Creating an image for memory encryption When the data plane contains AMD SEV Compute nodes, you can create an AMD SEV instance image that your cloud users can use to launch instances that have memory encryption. Note To execute openstack client commands on the cloud you must specify the name of the cloud detailed in your clouds.yaml file. You can specify the name of the cloud by using one of the following methods: Use the --os-cloud option with each command: Use this option if you access more than one cloud. Create an environment variable for the cloud name in your bashrc file: Prerequisites The administrator has created a project for you and they have provided you with a clouds.yaml file for you to access the cloud. You have installed the python-openstackclient package. Procedure Create a new image for memory encryption: Note If you use an existing image, the image must have the hw_firmware_type property set to uefi . Add the property hw_mem_encryption=True to the image to enable AMD SEV memory encryption on the image: Tip You can enable memory encryption on the flavor. For more information, see Creating a flavor for memory encryption . 5.6.5. Creating a flavor for memory encryption When the data plane contains AMD SEV Compute nodes, you can create one or more AMD SEV flavors that your cloud users can use to launch instances that have memory encryption. 
Prerequisites You have the oc command line tool installed on your workstation. You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges. Note An AMD SEV flavor is necessary only when the hw_mem_encryption property is not set on an image. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Create a flavor for memory encryption: Exit the openstackclient pod: 5.6.6. Launching an instance with memory encryption To verify that you can launch instances on an AMD SEV Compute node with memory encryption enabled, use a memory encryption flavor or image to create an instance. Note To execute openstack client commands on the cloud you must specify the name of the cloud detailed in your clouds.yaml file. You can specify the name of the cloud by using one of the following methods: Use the --os-cloud option with each command: Use this option if you access more than one cloud. Create an environment variable for the cloud name in your bashrc file: Prerequisites The administrator has created a project for you and they have provided you with a clouds.yaml file for you to access the cloud. You have installed the python-openstackclient package. Procedure Create an instance by using an AMD SEV flavor or image. The following example creates an instance by using the flavor created in Creating a flavor for memory encryption and the image created in Creating an image for memory encryption : Log in to the instance as a cloud user. To verify that the instance uses memory encryption, enter the following command from the instance: | [
"apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: name: my-data-plane-node-set spec: nodeTemplate: ansible: ansibleVars: edpm_bootstrap_swap_size_megabytes: 1024 edpm_bootstrap_swap_path: /swap edpm_bootstrap_swap_partition_enabled: false edpm_bootstrap_swap_partition_label: swap1",
"oc apply -f my_data_plane_node_set.yaml -n openstack",
"oc get openstackdataplanenodeset",
"NAME STATUS MESSAGE my-data-plane-node-set False Deployment not started",
"apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: my-data-plane-deploy",
"spec: nodeSets: - my-data-plane-node-set",
"oc create -f my_data_plane_deploy.yaml -n openstack",
"oc get pod -l app=openstackansibleee -n openstack -w oc logs -l app=openstackansibleee -n openstack -f --max-log-requests 10",
"oc get openstackdataplanedeployment -n openstack",
"NAME STATUS MESSAGE my-data-plane-node-set True Setup Complete",
"oc get openstackdataplanenodeset -n openstack",
"NAME STATUS MESSAGE my-data-plane-node-set True NodeSet Ready",
"reserved_host_memory_mb = total_RAM - ( (vm_no * (avg_instance_size + overhead)) + (resource1 * resource_ram) + (resourceN * resource_ram))",
"apiVersion: v1 kind: ConfigMap metadata: name: nova-extra-config namespace: openstack data: 28-nova-huge-pages.conf: | 1 [default] reserved_huge_pages = node:0,size:2048,count:64 reserved_huge_pages = node:1,size:1GB,count:1 [libvirt] cpu_mode = custom cpu_models = Haswell-noTSX cpu_model_extra_flags = vmx, pdpe1gb, +pcid",
"apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: openstack-edpm-huge-pages",
"apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: openstack-edpm-huge-pages spec: nodeSets: - openstack-edpm - compute-huge-pages - - <nodeSet_name>",
"oc create -f compute_huge_pages_deploy.yaml",
"oc get openstackdataplanenodeset NAME STATUS MESSAGE compute-huge-pages True Deployed",
"oc rsh -n openstack openstackclient openstack hypervisor list",
"openstack flavor list --os-cloud <cloud_name>",
"`export OS_CLOUD=<cloud_name>`",
"openstack flavor create --ram <size_mb> --disk <size_gb> --vcpus <num_reserved_vcpus> huge_pages",
"openstack --os-compute-api=2.86 flavor set huge_pages --property hw:mem_page_size=<page_size>",
"openstack server create --flavor huge_pages --image <image> huge_pages_instance",
"apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: name: my-data-plane-node-set spec: nodeTemplate: ansible: ansibleVars: edpm_default_mounts: | [ { \"name\": \"hugepages1G\", \"path\": \"/dev/hugepages1G\", \"opts\": \"pagesize=1G\", \"fstype\": \"hugetlbfs\", \"group\": \"hugetlbfs\" }, { \"name\": \"hugepages2M\", \"path\": \"/dev/hugepages2M\", \"opts\": \"pagesize=2M\", \"fstype\": \"hugetlbfs\", \"group\": \"hugetlbfs\" } ]",
"oc apply -f my_data_plane_node_set.yaml",
"oc get openstackdataplanenodeset",
"NAME STATUS MESSAGE my-data-plane-node-set False Deployment not started",
"apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: my-data-plane-deploy",
"spec: nodeSets: - my-data-plane-node-set",
"oc create -f my_data_plane_deploy.yaml -n openstack",
"oc get pod -l app=openstackansibleee -n openstack -w oc logs -l app=openstackansibleee -n openstack -f --max-log-requests 10",
"oc get openstackdataplanedeployment -n openstack",
"NAME STATUS MESSAGE my-data-plane-node-set True Setup Complete",
"oc get openstackdataplanenodeset -n openstack",
"NAME STATUS MESSAGE my-data-plane-node-set True NodeSet Ready",
"apiVersion: v1 kind: ConfigMap metadata: name: nova-extra-config namespace: openstack data: 30-nova-file-backed-memory.conf: | [libvirt] file_backed_memory = 1048576",
"[libvirt] file_backed_memory = 1048576 memory_backing_dir = <new_directory_location>",
"openstack baremetal node set kind: OpenStackDataPlaneDeployment metadata: name: compute-file-backed-memory",
"apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: compute-file-backed-memory spec: nodeSets: - openstack-edpm - compute-file-backed-memory - - <nodeSet_name>",
"oc create -f compute_file_backed_memory_deploy.yaml",
"oc get openstackdataplanenodeset NAME STATUS MESSAGE compute-file-backed-memory True Deployed",
"oc rsh -n openstack openstackclient openstack hypervisor list",
"mkfs.ext4 /dev/sdb",
"mount /dev/sdb /var/lib/libvirt/qemu/ram",
"lscpu | grep sev",
"apiVersion: v1 kind: ConfigMap metadata: name: nova-extra-config namespace: openstack data: 30-nova-amd-sev.conf: | [libvirt] num_memory_encrypted_guests = 15",
"apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: name: my-data-plane-node-set spec: nodeTemplate: ansible: ansibleVars: edpm_kernel_args: \"default_hugepagesz=1GB hugepagesz=1G hugepages=64 iommu=pt intel_iommu=on tsx=off isolcpus=2-11,14-23\"",
"apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: openstack-edpm-amd-sev",
"apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: openstack-edpm-amd-sev spec: nodeSets: - openstack-edpm - compute-amd-sev - my-data-plane-node-set - - <nodeSet_name>",
"oc create -f compute_amd_sev_deploy.yaml",
"oc get openstackdataplanenodeset NAME STATUS MESSAGE openstack-edpm True Deployed",
"oc rsh -n openstack openstackclient openstack hypervisor list",
"openstack flavor list --os-cloud <cloud_name>",
"`export OS_CLOUD=<cloud_name>`",
"openstack image create ... --property hw_firmware_type=uefi amd-sev-image",
"openstack image set --property hw_mem_encryption=True amd-sev-image",
"oc rsh -n openstack openstackclient",
"openstack flavor create --vcpus 1 --ram 512 --disk 2 --property hw:mem_encryption=True m1.small-amd-sev",
"exit",
"openstack flavor list --os-cloud <cloud_name>",
"`export OS_CLOUD=<cloud_name>`",
"openstack server create --flavor m1.small-amd-sev --image amd-sev-image amd-sev-instance",
"dmesg | grep -i sev AMD Secure Encrypted Virtualization (SEV) active"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_the_compute_service_for_instance_creation/assembly_configuring-memory-on-compute-nodes |
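The following worked example applies the formulas from sections 5.2 and 5.3 to hypothetical values; none of these figures come from this guide, so substitute the measurements for your own nodes. For a Compute node with 262144 MB (256 GB) of total RAM that hosts 50 instances averaging 4096 MB each, with 512 MB of hypervisor overhead per instance and one Ceph OSD consuming 3072 MB:

    reserved_host_memory_mb = 262144 - ((50 * (4096 + 512)) + (1 * 3072)) = 28672 MB

For the swap formulas, assuming the same node with ram_allocation_ratio set to 1.5 and 25% of RAM used as the swap buffer:

    overcommit_ratio = 1.5 - 1 = 0.5
    Minimum swap size (MB) = (262144 * 0.5) + RHEL_min_swap = 131072 + RHEL_min_swap
    Recommended (maximum) swap size (MB) = 262144 * (0.5 + 0.25) = 196608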
14.5.13. Displaying the Block Devices Associated with a Domain | 14.5.13. Displaying the Block Devices Associated with a Domain The domblklist domain --inactive --details command displays a table of all block devices that are associated with the specified domain. If --inactive is specified, the result shows the devices that are to be used at the next boot, rather than the devices that are currently in use by the running domain. If --details is specified, the disk type and device value are included in the table. The information displayed in this table can be used with the domblkinfo and snapshot-create commands; a brief usage sketch follows the command listing below. | [
"domblklist rhel6 --details"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-domain_commands-displaying_the_block_devices_associated_with_a_domain |
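A brief usage sketch for the options described above, using the rhel6 example domain from this section; the vda target name is a placeholder for whatever target your own guest reports:

    # Block devices currently attached to the running guest, including disk type and device value
    virsh domblklist rhel6 --details

    # Block devices in the persistent configuration, that is, the devices to be used at the next boot
    virsh domblklist rhel6 --inactive

    # A target name from the output can then be passed to domblkinfo
    virsh domblkinfo rhel6 vda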
Chapter 2. Installing Apicurio Registry on OpenShift | Chapter 2. Installing Apicurio Registry on OpenShift This chapter explains how to install Apicurio Registry on OpenShift Container Platform: Section 2.1, "Installing Apicurio Registry from the OpenShift OperatorHub" Prerequisites Read the introduction in the Red Hat build of Apicurio Registry User Guide . 2.1. Installing Apicurio Registry from the OpenShift OperatorHub You can install the Apicurio Registry Operator on your OpenShift cluster from the OperatorHub. The OperatorHub is available from the OpenShift Container Platform web console and provides an interface for cluster administrators to discover and install Operators. For more details, see Understanding OperatorHub . Note You can install more than one instance of Apicurio Registry depending on your environment. The number of instances depends on the number and type of artifacts stored in Apicurio Registry and on your chosen storage option. Prerequisites You must have cluster administrator access to an OpenShift cluster. Procedure In the OpenShift Container Platform web console, log in using an account with cluster administrator privileges. Create a new OpenShift project: In the left navigation menu, click Home , Project , and then Create Project . Enter a project name, for example, my-project , and click Create . In the left navigation menu, click Operators and then OperatorHub . In the Filter by keyword text box, enter registry to find the Red Hat Integration - Service Registry Operator . Read the information about the Operator, and click Install to display the Operator subscription page. Select your subscription settings, for example: Update Channel : Select one of the following: 2.x : Includes all minor and patch updates, such as 2.3.0 and 2.0.3. For example, an installation on 2.0.x will upgrade to 2.3.x. 2.0.x : Includes patch updates only, such as 2.0.1 and 2.0.2. For example, an installation on 2.0.x will ignore 2.3.x. Installation Mode : Select one of the following: All namespaces on the cluster (default) A specific namespace on the cluster and then my-project Approval Strategy : Select Automatic or Manual Click Install , and wait a few moments until the Operator is ready for use. Additional resources Adding Operators to an OpenShift cluster Apicurio Registry Operator community in GitHub | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apicurio_registry/2.6/html/installing_and_deploying_apicurio_registry_on_openshift/installing-registry-ocp_registry |
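The chapter above describes the web-console flow. If you prefer to script the same OperatorHub installation, an Operator Lifecycle Manager Subscription resource is the usual command-line equivalent. The following is a minimal sketch only: the package name service-registry-operator and the redhat-operators catalog source are assumptions, so confirm them against your cluster (for example, with oc get packagemanifests -n openshift-marketplace) before applying the resource.

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: service-registry-operator       # any name; assumed here to match the package
      namespace: openshift-operators        # for the all-namespaces mode; use your project for a single-namespace install
    spec:
      channel: "2.x"                        # update channel described in this chapter
      installPlanApproval: Automatic
      name: service-registry-operator       # assumed package name in the catalog
      source: redhat-operators              # assumed catalog source
      sourceNamespace: openshift-marketplace

A single-namespace installation also requires an OperatorGroup that targets the chosen namespace.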
Chapter 4. ConsoleLink [console.openshift.io/v1] | Chapter 4. ConsoleLink [console.openshift.io/v1] Description ConsoleLink is an extension for customizing OpenShift web console links. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConsoleLinkSpec is the desired console link configuration. 4.1.1. .spec Description ConsoleLinkSpec is the desired console link configuration. Type object Required href location text Property Type Description applicationMenu object applicationMenu holds information about section and icon used for the link in the application menu, and it is applicable only when location is set to ApplicationMenu. href string href is the absolute secure URL for the link (must use https) location string location determines which location in the console the link will be appended to (ApplicationMenu, HelpMenu, UserMenu, NamespaceDashboard). namespaceDashboard object namespaceDashboard holds information about namespaces in which the dashboard link should appear, and it is applicable only when location is set to NamespaceDashboard. If not specified, the link will appear in all namespaces. text string text is the display text for the link 4.1.2. .spec.applicationMenu Description applicationMenu holds information about section and icon used for the link in the application menu, and it is applicable only when location is set to ApplicationMenu. Type object Required section Property Type Description imageURL string imageUrl is the URL for the icon used in front of the link in the application menu. The URL must be an HTTPS URL or a Data URI. The image should be square and will be shown at 24x24 pixels. section string section is the section of the application menu in which the link should appear. This can be any text that will appear as a subheading in the application menu dropdown. A new section will be created if the text does not match text of an existing section. 4.1.3. .spec.namespaceDashboard Description namespaceDashboard holds information about namespaces in which the dashboard link should appear, and it is applicable only when location is set to NamespaceDashboard. If not specified, the link will appear in all namespaces. Type object Property Type Description namespaceSelector object namespaceSelector is used to select the Namespaces that should contain dashboard link by label. If the namespace labels match, dashboard link will be shown for the namespaces. namespaces array (string) namespaces is an array of namespace names in which the dashboard link should appear. 4.1.4. 
.spec.namespaceDashboard.namespaceSelector Description namespaceSelector is used to select the Namespaces that should contain dashboard link by label. If the namespace labels match, dashboard link will be shown for the namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 4.1.5. .spec.namespaceDashboard.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 4.1.6. .spec.namespaceDashboard.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 4.2. API endpoints The following API endpoints are available: /apis/console.openshift.io/v1/consolelinks DELETE : delete collection of ConsoleLink GET : list objects of kind ConsoleLink POST : create a ConsoleLink /apis/console.openshift.io/v1/consolelinks/{name} DELETE : delete a ConsoleLink GET : read the specified ConsoleLink PATCH : partially update the specified ConsoleLink PUT : replace the specified ConsoleLink /apis/console.openshift.io/v1/consolelinks/{name}/status GET : read status of the specified ConsoleLink PATCH : partially update status of the specified ConsoleLink PUT : replace status of the specified ConsoleLink 4.2.1. /apis/console.openshift.io/v1/consolelinks Table 4.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ConsoleLink Table 4.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 4.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ConsoleLink Table 4.4. 
Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 4.5. HTTP responses HTTP code Response body 200 - OK ConsoleLinkList schema 401 - Unauthorized Empty HTTP method POST Description create a ConsoleLink Table 4.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.7. Body parameters Parameter Type Description body ConsoleLink schema Table 4.8. HTTP responses HTTP code Response body 200 - OK ConsoleLink schema 201 - Created ConsoleLink schema 202 - Accepted ConsoleLink schema 401 - Unauthorized Empty 4.2.2. /apis/console.openshift.io/v1/consolelinks/{name} Table 4.9. Global path parameters Parameter Type Description name string name of the ConsoleLink Table 4.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ConsoleLink Table 4.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. 
Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 4.12. Body parameters Parameter Type Description body DeleteOptions schema Table 4.13. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConsoleLink Table 4.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 4.15. HTTP responses HTTP code Response body 200 - OK ConsoleLink schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConsoleLink Table 4.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.17. 
Body parameters Parameter Type Description body Patch schema Table 4.18. HTTP responses HTTP code Response body 200 - OK ConsoleLink schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConsoleLink Table 4.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.20. Body parameters Parameter Type Description body ConsoleLink schema Table 4.21. HTTP responses HTTP code Response body 200 - OK ConsoleLink schema 201 - Created ConsoleLink schema 401 - Unauthorized Empty 4.2.3. /apis/console.openshift.io/v1/consolelinks/{name}/status Table 4.22. Global path parameters Parameter Type Description name string name of the ConsoleLink Table 4.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified ConsoleLink Table 4.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 4.25. HTTP responses HTTP code Response body 200 - OK ConsoleLink schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ConsoleLink Table 4.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.27. Body parameters Parameter Type Description body Patch schema Table 4.28. HTTP responses HTTP code Response body 200 - OK ConsoleLink schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ConsoleLink Table 4.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.30. Body parameters Parameter Type Description body ConsoleLink schema Table 4.31. 
HTTP responses HTTP code Response body 200 - OK ConsoleLink schema 201 - Created ConsoleLink schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/console_apis/consolelink-console-openshift-io-v1
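The endpoints documented above can also be exercised with the standard OpenShift CLI. The following is a minimal sketch rather than part of the generated reference: it assumes a logged-in oc session with permission to manage ConsoleLink objects, and the link name example-link together with the spec values are placeholders to adjust for your cluster.

# Create a ConsoleLink by applying a manifest (hypothetical example values)
cat <<'EOF' | oc apply -f -
apiVersion: console.openshift.io/v1
kind: ConsoleLink
metadata:
  name: example-link
spec:
  href: https://docs.example.com
  location: HelpMenu
  text: Example documentation
EOF

# List ConsoleLink objects through the collection endpoint, using the limit query parameter
oc get --raw '/apis/console.openshift.io/v1/consolelinks?limit=50'

# Delete the example link again
oc delete consolelink example-link

Reading the list back in chunks, as described above, works the same way: pass the value of metadata.continue from the previous response as the continue query parameter on the next request.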
Chapter 10. Network configuration | Chapter 10. Network configuration The following sections describe the basics of network configuration with the Assisted Installer. 10.1. Cluster networking There are various network types and addresses used by OpenShift and listed in the following table. Important IPv6 is not currently supported in the following configurations: Single stack Primary within dual stack Type DNS Description clusterNetwork The IP address pools from which pod IP addresses are allocated. serviceNetwork The IP address pool for services. machineNetwork The IP address blocks for machines forming the cluster. apiVIP api.<clustername.clusterdomain> The VIP to use for API communication. You must provide this setting or preconfigure the address in the DNS so that the default name resolves correctly. If you are deploying with dual-stack networking, this must be the IPv4 address. apiVIPs api.<clustername.clusterdomain> The VIPs to use for API communication. You must provide this setting or preconfigure the address in the DNS so that the default name resolves correctly. If using dual stack networking, the first address must be the IPv4 address and the second address must be the IPv6 address. You must also set the apiVIP setting. ingressVIP *.apps.<clustername.clusterdomain> The VIP to use for ingress traffic. If you are deploying with dual-stack networking, this must be the IPv4 address. ingressVIPs *.apps.<clustername.clusterdomain> The VIPs to use for ingress traffic. If you are deploying with dual-stack networking, the first address must be the IPv4 address and the second address must be the IPv6 address. You must also set the ingressVIP setting. Note OpenShift Container Platform 4.12 introduces the new apiVIPs and ingressVIPs settings to accept many IP addresses for dual-stack networking. When using dual-stack networking, the first IP address must be the IPv4 address and the second IP address must be the IPv6 address. The new settings will replace apiVIP and IngressVIP , but you must set both the new and old settings when modifying the configuration by using the API. Currently, the Assisted Service can deploy OpenShift Container Platform clusters by using one of the following configurations: IPv4 Dual-stack (IPv4 + IPv6 with IPv4 as primary) Note OVN is the default Container Network Interface (CNI) in OpenShift Container Platform 4.12 and later releases. SDN is supported up to OpenShift Container Platform 4.14, but not for OpenShift Container Platform 4.15 and later releases. 10.1.1. Limitations 10.1.1.1. SDN The SDN controller is not supported with single-node OpenShift. The SDN controller does not support dual-stack networking. The SDN controller is not supported for OpenShift Container Platform 4.15 and later releases. For more information, see Deprecation of the OpenShift SDN network plugin in the OpenShift Container Platform release notes. 10.1.1.2. OVN-Kubernetes For more information, see About the OVN-Kubernetes network plugin . 10.1.2. Cluster network The cluster network is a network from which every pod deployed in the cluster gets its IP address. Given that the workload might live across many nodes forming the cluster, it is important for the network provider to be able to easily find an individual node based on the pod's IP address. To do this, clusterNetwork.cidr is further split into subnets of the size defined in clusterNetwork.hostPrefix . The host prefix specifies a length of the subnet assigned to each individual node in the cluster. 
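As a quick back-of-the-envelope check, assuming the /14 cluster CIDR and /23 host prefix used in the example that follows, you can work out how many node subnets the cluster network yields and roughly how many pod addresses each node receives:

# Number of /23 node subnets that fit into a /14 cluster CIDR: 2^(23-14)
echo $(( 2 ** (23 - 14) ))       # 512 node subnets
# Addresses per /23 node subnet, minus the usual two reserved addresses: 2^(32-23) - 2
echo $(( 2 ** (32 - 23) - 2 ))   # 510 pod IP addresses per node

The exact number of usable pod IPs per node can be slightly lower, because the network plugin reserves a few addresses in each node subnet for its own use.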
An example of how a cluster might assign addresses for the multi-node cluster: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Creating a 3-node cluster by using this snippet might create the following network topology: Pods scheduled in node #1 get IPs from 10.128.0.0/23 Pods scheduled in node #2 get IPs from 10.128.2.0/23 Pods scheduled in node #3 get IPs from 10.128.4.0/23 Explaining OVN-Kubernetes internals is out of scope for this document, but the pattern previously described provides a way to route Pod-to-Pod traffic between different nodes without keeping a big list of mapping between Pods and their corresponding nodes. 10.1.3. Machine network The machine network is a network used by all the hosts forming the cluster to communicate with each other. This is also the subnet that must include the API and Ingress VIPs. For iSCSI boot volumes, the hosts are connected over two machine networks: one designated for the OpenShift Container Platform installation and the other for iSCSI traffic. During the installation process, ensure that you specify the OpenShift Container Platform network. Using the iSCSI network will result in an error for the host. 10.1.4. Single-node OpenShift compared to multi-node cluster Depending on whether you are deploying single-node OpenShift or a multi-node cluster, different values are mandatory. The following table explains this in more detail. Parameter Single-node OpenShift Multi-node cluster with DHCP mode Multi-node cluster without DHCP mode clusterNetwork Required Required Required serviceNetwork Required Required Required machineNetwork Auto-assign possible (*) Auto-assign possible (*) Auto-assign possible (*) apiVIP Forbidden Forbidden Required apiVIPs Forbidden Forbidden Required in 4.12 and later releases ingressVIP Forbidden Forbidden Required ingressVIPs Forbidden Forbidden Required in 4.12 and later releases (*) Auto assignment of the machine network CIDR happens if there is only a single host network. Otherwise you need to specify it explicitly. 10.1.5. Air-gapped environments The workflow for deploying a cluster without Internet access has some prerequisites, which are out of scope of this document. You can consult the Zero Touch Provisioning the hard way Git repository for some insights. 10.2. VIP DHCP allocation The VIP DHCP allocation is a feature allowing users to skip the requirement of manually providing virtual IPs for API and Ingress by leveraging the ability of a service to automatically assign those IP addresses from the DHCP server. If you enable the feature, instead of using api_vips and ingress_vips from the cluster configuration, the service will send a lease allocation request and based on the reply it will use VIPs accordingly. The service will allocate the IP addresses from the Machine Network. Please note this is not an OpenShift Container Platform feature and it has been implemented in the Assisted Service to make the configuration easier. Important VIP DHCP allocation is currently limited to the OpenShift Container Platform SDN network type. SDN is not supported from OpenShift Container Platform version 4.15 and later. Therefore, support for VIP DHCP allocation is also ending from OpenShift Container Platform 4.15 and later. 10.2.1. 
Example payload to enable autoallocation { "vip_dhcp_allocation": true, "network_type": "OVNKubernetes", "user_managed_networking": false, "cluster_networks": [ { "cidr": "10.128.0.0/14", "host_prefix": 23 } ], "service_networks": [ { "cidr": "172.30.0.0/16" } ], "machine_networks": [ { "cidr": "192.168.127.0/24" } ] } 10.2.2. Example payload to disable autoallocation { "api_vips": [ { "ip": "192.168.127.100" } ], "ingress_vips": [ { "ip": "192.168.127.101" } ], "vip_dhcp_allocation": false, "network_type": "OVNKubernetes", "user_managed_networking": false, "cluster_networks": [ { "cidr": "10.128.0.0/14", "host_prefix": 23 } ], "service_networks": [ { "cidr": "172.30.0.0/16" } ] } 10.3. Additional resources Bare metal IPI documentation provides additional explanation of the syntax for the VIP addresses. 10.4. Understanding differences between user- and cluster-managed networking User managed networking is a feature in the Assisted Installer that allows customers with non-standard network topologies to deploy OpenShift Container Platform clusters. Examples include: Customers with an external load balancer who do not want to use keepalived and VRRP for handling VIP addresses. Deployments with cluster nodes distributed across many distinct L2 network segments. 10.4.1. Validations There are various network validations happening in the Assisted Installer before it allows the installation to start. When you enable User Managed Networking, the following validations change: The L3 connectivity check (ICMP) is performed instead of the L2 check (ARP). The MTU validation verifies the maximum transmission unit (MTU) value for all interfaces and not only for the machine network. 10.5. Static network configuration You may use static network configurations when generating or updating the discovery ISO. 10.5.1. Prerequisites You are familiar with NMState . 10.5.2. NMState configuration The NMState file in YAML format specifies the desired network configuration for the host. It has the logical names of the interfaces that will be replaced with the actual name of the interface at discovery time. 10.5.2.1. Example of NMState configuration dns-resolver: config: server: - 192.168.126.1 interfaces: - ipv4: address: - ip: 192.168.126.30 prefix-length: 24 dhcp: false enabled: true name: eth0 state: up type: ethernet - ipv4: address: - ip: 192.168.141.30 prefix-length: 24 dhcp: false enabled: true name: eth1 state: up type: ethernet routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.126.1 next-hop-interface: eth0 table-id: 254 10.5.3. MAC interface mapping MAC interface map is an attribute that maps logical interfaces defined in the NMState configuration with the actual interfaces present on the host. The mapping should always use physical interfaces present on the host. For example, when the NMState configuration defines a bond or VLAN, the mapping should only contain an entry for parent interfaces. 10.5.3.1. Example of MAC interface mapping mac_interface_map: [ { mac_address: 02:00:00:2c:23:a5, logical_nic_name: eth0 }, { mac_address: 02:00:00:68:73:dc, logical_nic_name: eth1 } ] 10.5.4. Additional NMState configuration examples The following examples are only meant to show a partial configuration. They are not meant for use as-is, and you should always adjust to the environment where they will be used. If used incorrectly, they can leave your machines with no network connectivity. 10.5.4.1. 
Tagged VLAN interfaces: - ipv4: address: - ip: 192.168.143.15 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false name: eth0.404 state: up type: vlan vlan: base-iface: eth0 id: 404 reorder-headers: true 10.5.4.2. Network bond interfaces: - ipv4: address: - ip: 192.168.138.15 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false link-aggregation: mode: active-backup options: miimon: "140" port: - eth0 - eth1 name: bond0 state: up type: bond 10.6. Applying a static network configuration with the API You can apply a static network configuration by using the Assisted Installer API. Important A static IP configuration is not supported in the following scenarios: OpenShift Container Platform installations on Oracle Cloud Infrastructure. OpenShift Container Platform installations on iSCSI boot volumes. Prerequisites You have created an infrastructure environment using the API or have created a cluster using the web console. You have your infrastructure environment ID exported in your shell as USDINFRA_ENV_ID . You have credentials to use when accessing the API and have exported a token as USDAPI_TOKEN in your shell. You have YAML files with a static network configuration available as server-a.yaml and server-b.yaml . Procedure Create a temporary file /tmp/request-body.txt with the API request: jq -n --arg NMSTATE_YAML1 "USD(cat server-a.yaml)" --arg NMSTATE_YAML2 "USD(cat server-b.yaml)" \ '{ "static_network_config": [ { "network_yaml": USDNMSTATE_YAML1, "mac_interface_map": [{"mac_address": "02:00:00:2c:23:a5", "logical_nic_name": "eth0"}, {"mac_address": "02:00:00:68:73:dc", "logical_nic_name": "eth1"}] }, { "network_yaml": USDNMSTATE_YAML2, "mac_interface_map": [{"mac_address": "02:00:00:9f:85:eb", "logical_nic_name": "eth1"}, {"mac_address": "02:00:00:c8:be:9b", "logical_nic_name": "eth0"}] } ] }' >> /tmp/request-body.txt Refresh the API token: USD source refresh-token Send the request to the Assisted Service API endpoint: USD curl -H "Content-Type: application/json" \ -X PATCH -d @/tmp/request-body.txt \ -H "Authorization: Bearer USD{API_TOKEN}" \ https://api.openshift.com/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID 10.7. Additional resources Applying a static network configuration with the web console 10.8. Converting to dual-stack networking Dual-stack IPv4/IPv6 configuration allows deployment of a cluster with pods residing in both IPv4 and IPv6 subnets. 10.8.1. Prerequisites You are familiar with OVN-K8s documentation 10.8.2. Example payload for single-node OpenShift { "network_type": "OVNKubernetes", "user_managed_networking": false, "cluster_networks": [ { "cidr": "10.128.0.0/14", "host_prefix": 23 }, { "cidr": "fd01::/48", "host_prefix": 64 } ], "service_networks": [ {"cidr": "172.30.0.0/16"}, {"cidr": "fd02::/112"} ], "machine_networks": [ {"cidr": "192.168.127.0/24"},{"cidr": "1001:db8::/120"} ] } 10.8.3. 
Example payload for an OpenShift Container Platform cluster consisting of many nodes { "vip_dhcp_allocation": false, "network_type": "OVNKubernetes", "user_managed_networking": false, "api_vips": [ { "ip": "192.168.127.100" }, { "ip": "2001:0db8:85a3:0000:0000:8a2e:0370:7334" } ], "ingress_vips": [ { "ip": "192.168.127.101" }, { "ip": "2001:0db8:85a3:0000:0000:8a2e:0370:7335" } ], "cluster_networks": [ { "cidr": "10.128.0.0/14", "host_prefix": 23 }, { "cidr": "fd01::/48", "host_prefix": 64 } ], "service_networks": [ {"cidr": "172.30.0.0/16"}, {"cidr": "fd02::/112"} ], "machine_networks": [ {"cidr": "192.168.127.0/24"},{"cidr": "1001:db8::/120"} ] } 10.8.4. Limitations The api_vips IP address and ingress_vips IP address settings must be of the primary IP address family when using dual-stack networking, which must be IPv4 addresses. Currently, Red Hat does not support dual-stack VIPs or dual-stack networking with IPv6 as the primary IP address family. Red Hat supports dual-stack networking with IPv4 as the primary IP address family and IPv6 as the secondary IP address family. Therefore, you must place the IPv4 entries before the IPv6 entries when entering the IP address values. 10.9. Additional resources Understanding OpenShift networking About the OpenShift SDN network plugin OVN-Kubernetes - CNI network provider Dual-stack Service configuration scenarios Installing a user-provisioned bare metal cluster with network customizations . Cluster Network Operator configuration object | [
"clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"{ \"vip_dhcp_allocation\": true, \"network_type\": \"OVNKubernetes\", \"user_managed_networking\": false, \"cluster_networks\": [ { \"cidr\": \"10.128.0.0/14\", \"host_prefix\": 23 } ], \"service_networks\": [ { \"cidr\": \"172.30.0.0/16\" } ], \"machine_networks\": [ { \"cidr\": \"192.168.127.0/24\" } ] }",
"{ \"api_vips\": [ { \"ip\": \"192.168.127.100\" } ], \"ingress_vips\": [ { \"ip\": \"192.168.127.101\" } ], \"vip_dhcp_allocation\": false, \"network_type\": \"OVNKubernetes\", \"user_managed_networking\": false, \"cluster_networks\": [ { \"cidr\": \"10.128.0.0/14\", \"host_prefix\": 23 } ], \"service_networks\": [ { \"cidr\": \"172.30.0.0/16\" } ] }",
"dns-resolver: config: server: - 192.168.126.1 interfaces: - ipv4: address: - ip: 192.168.126.30 prefix-length: 24 dhcp: false enabled: true name: eth0 state: up type: ethernet - ipv4: address: - ip: 192.168.141.30 prefix-length: 24 dhcp: false enabled: true name: eth1 state: up type: ethernet routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.126.1 next-hop-interface: eth0 table-id: 254",
"mac_interface_map: [ { mac_address: 02:00:00:2c:23:a5, logical_nic_name: eth0 }, { mac_address: 02:00:00:68:73:dc, logical_nic_name: eth1 } ]",
"interfaces: - ipv4: address: - ip: 192.168.143.15 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false name: eth0.404 state: up type: vlan vlan: base-iface: eth0 id: 404 reorder-headers: true",
"interfaces: - ipv4: address: - ip: 192.168.138.15 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false link-aggregation: mode: active-backup options: miimon: \"140\" port: - eth0 - eth1 name: bond0 state: up type: bond",
"jq -n --arg NMSTATE_YAML1 \"USD(cat server-a.yaml)\" --arg NMSTATE_YAML2 \"USD(cat server-b.yaml)\" '{ \"static_network_config\": [ { \"network_yaml\": USDNMSTATE_YAML1, \"mac_interface_map\": [{\"mac_address\": \"02:00:00:2c:23:a5\", \"logical_nic_name\": \"eth0\"}, {\"mac_address\": \"02:00:00:68:73:dc\", \"logical_nic_name\": \"eth1\"}] }, { \"network_yaml\": USDNMSTATE_YAML2, \"mac_interface_map\": [{\"mac_address\": \"02:00:00:9f:85:eb\", \"logical_nic_name\": \"eth1\"}, {\"mac_address\": \"02:00:00:c8:be:9b\", \"logical_nic_name\": \"eth0\"}] } ] }' >> /tmp/request-body.txt",
"source refresh-token",
"curl -H \"Content-Type: application/json\" -X PATCH -d @/tmp/request-body.txt -H \"Authorization: Bearer USD{API_TOKEN}\" https://api.openshift.com/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID",
"{ \"network_type\": \"OVNKubernetes\", \"user_managed_networking\": false, \"cluster_networks\": [ { \"cidr\": \"10.128.0.0/14\", \"host_prefix\": 23 }, { \"cidr\": \"fd01::/48\", \"host_prefix\": 64 } ], \"service_networks\": [ {\"cidr\": \"172.30.0.0/16\"}, {\"cidr\": \"fd02::/112\"} ], \"machine_networks\": [ {\"cidr\": \"192.168.127.0/24\"},{\"cidr\": \"1001:db8::/120\"} ] }",
"{ \"vip_dhcp_allocation\": false, \"network_type\": \"OVNKubernetes\", \"user_managed_networking\": false, \"api_vips\": [ { \"ip\": \"192.168.127.100\" }, { \"ip\": \"2001:0db8:85a3:0000:0000:8a2e:0370:7334\" } ], \"ingress_vips\": [ { \"ip\": \"192.168.127.101\" }, { \"ip\": \"2001:0db8:85a3:0000:0000:8a2e:0370:7335\" } ], \"cluster_networks\": [ { \"cidr\": \"10.128.0.0/14\", \"host_prefix\": 23 }, { \"cidr\": \"fd01::/48\", \"host_prefix\": 64 } ], \"service_networks\": [ {\"cidr\": \"172.30.0.0/16\"}, {\"cidr\": \"fd02::/112\"} ], \"machine_networks\": [ {\"cidr\": \"192.168.127.0/24\"},{\"cidr\": \"1001:db8::/120\"} ] }"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_openshift_container_platform_with_the_assisted_installer/assembly_network-configuration |
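The network payloads shown in this chapter, such as the VIP DHCP allocation and dual-stack payloads, are applied to an existing cluster definition through the Assisted Installer REST API. The following is a minimal sketch rather than a verbatim excerpt from the chapter: it assumes the same token-refresh flow used for the static network configuration above, a cluster ID exported as CLUSTER_ID, and that PATCH /v2/clusters/{cluster_id} accepts these fields in your version of the service; verify against the Assisted Installer API reference before relying on it.

# Save the desired networking payload to a file (values are placeholders)
cat > /tmp/network-patch.json <<'EOF'
{
  "vip_dhcp_allocation": false,
  "api_vips": [ { "ip": "192.168.127.100" } ],
  "ingress_vips": [ { "ip": "192.168.127.101" } ],
  "machine_networks": [ { "cidr": "192.168.127.0/24" } ]
}
EOF

# Refresh the API token and PATCH the cluster definition
source refresh-token
curl -H "Content-Type: application/json" \
     -H "Authorization: Bearer ${API_TOKEN}" \
     -X PATCH -d @/tmp/network-patch.json \
     "https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID}"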
8.6. Suspending an XFS File System | 8.6. Suspending an XFS File System To suspend or resume write activity to a file system, use xfs_freeze . Suspending write activity allows hardware-based device snapshots to be used to capture the file system in a consistent state. Note The xfs_freeze utility is provided by the xfsprogs package, which is only available on x86_64. To suspend (that is, freeze) an XFS file system, use: To unfreeze an XFS file system, use: When taking an LVM snapshot, it is not necessary to use xfs_freeze to suspend the file system first. Rather, the LVM management tools will automatically suspend the XFS file system before taking the snapshot. Note The xfs_freeze utility can also be used to freeze or unfreeze an ext3, ext4, GFS2, XFS, or BTRFS file system. The syntax for doing so is the same. For more information about freezing and unfreezing an XFS file system, refer to man xfs_freeze . A sample freeze-and-snapshot sequence follows the command listing below. | [
"xfs_freeze -f /mount/point",
"xfs_freeze -u /mount/point"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/xfsfreeze |
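As an illustrative end-to-end sequence, assuming an XFS file system mounted at /mnt/data and a vendor snapshot utility represented here by the placeholder storage-snapshot-tool, a hardware-based snapshot with xfs_freeze might look like this:

# Flush pending writes and freeze the XFS file system
xfs_freeze -f /mnt/data

# Take the hardware or array snapshot while the file system is quiesced
# (storage-snapshot-tool, the volume name, and the snapshot name are placeholders)
storage-snapshot-tool create --volume data_vol --name data_vol_snap

# Resume write activity as soon as the snapshot completes
xfs_freeze -u /mnt/data

This manual freeze step is only needed for hardware-based snapshots; as noted above, the LVM tools suspend the file system automatically when taking an LVM snapshot.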
probe::tty.write | probe::tty.write Name probe::tty.write - write to the tty line Synopsis Values driver_name the driver name buffer the buffer that will be written file_name the file name related to the tty nr the number of characters written. An example script that uses this probe follows the synopsis below. | [
"tty.write"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-tty-write |
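A minimal sketch of using this probe from the command line, assuming SystemTap and the matching kernel debuginfo are installed, prints which process is writing how many characters to which tty:

# Trace tty writes system-wide; nr, file_name and driver_name are the probe values listed above
stap -e 'probe tty.write {
  printf("%s (pid %d) wrote %d chars to %s via %s\n",
         execname(), pid(), nr, file_name, driver_name)
}'

execname() and pid() are general-purpose SystemTap tapset functions; the remaining variables come directly from the probe.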
Chapter 3. Installing a cluster on OpenStack with customizations | Chapter 3. Installing a cluster on OpenStack with customizations In OpenShift Container Platform version 4.18, you can install a customized cluster on Red Hat OpenStack Platform (RHOSP). To customize the installation, modify parameters in the install-config.yaml before you install the cluster. 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.18 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You have a storage service installed in RHOSP, such as block storage (Cinder) or object storage (Swift). Object storage is the recommended storage technology for OpenShift Container Platform registry cluster deployment. For more information, see Optimizing storage . You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended practices for scaling the cluster . You have the metadata service enabled in RHOSP. 3.2. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 3.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP Resource Value Floating IP addresses 3 Ports 15 Routers 1 Subnets 1 RAM 88 GB vCPUs 22 Volume storage 275 GB Instances 7 Security groups 3 Security group rules 60 Server groups 2 - plus 1 for each additional availability zone in each machine pool A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 3.2.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 3.2.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 3.2.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. 
After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 3.2.4. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you can provision your own API and application ingress load balancing infrastructure to use in place of the default, internal load balancing solution. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 3.2. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 3.3. 
Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 3.2.4.1. Example load balancer configuration for clusters that are deployed with user-managed load balancers This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for clusters that are deployed with user-managed load balancers. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 3.1. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 
3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 3.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.18, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 3.4. Enabling Swift on RHOSP Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program. Important If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. Important RHOSP 17 sets the rgw_max_attr_size parameter of Ceph RGW to 256 characters. This setting causes issues with uploading container images to the OpenShift Container Platform registry. You must set the value of rgw_max_attr_size to at least 1024 characters. Before installation, check if your RHOSP deployment is affected by this problem. If it is, reconfigure Ceph RGW. Prerequisites You have a RHOSP administrator account on the target environment. The Swift service is installed. On Ceph RGW , the account in url option is enabled. Procedure To enable Swift on RHOSP: As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift: USD openstack role add --user <user> --project <project> swiftoperator Your RHOSP deployment can now use Swift for the image registry. 3.5. 
Configuring an image registry with custom storage on clusters that run on RHOSP After you install a cluster on Red Hat OpenStack Platform (RHOSP), you can use a Cinder volume that is in a specific availability zone for registry storage. Procedure Create a YAML file that specifies the storage class and availability zone to use. For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name> Note OpenShift Container Platform does not verify the existence of the availability zone you choose. Verify the name of the availability zone before you apply the configuration. From a command line, apply the configuration: USD oc apply -f <storage_class_file_name> Example output storageclass.storage.k8s.io/custom-csi-storageclass created Create a YAML file that specifies a persistent volume claim (PVC) that uses your storage class and the openshift-image-registry namespace. For example: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: "true" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3 1 Enter the namespace openshift-image-registry . This namespace allows the Cluster Image Registry Operator to consume the PVC. 2 Optional: Adjust the volume size. 3 Enter the name of the storage class that you created. From a command line, apply the configuration: USD oc apply -f <pvc_file_name> Example output persistentvolumeclaim/csi-pvc-imageregistry created Replace the original persistent volume claim in the image registry configuration with the new claim: USD oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{"op": "replace", "path": "/spec/storage/pvc/claim", "value": "csi-pvc-imageregistry"}]' Example output config.imageregistry.operator.openshift.io/cluster patched Over the next several minutes, the configuration is updated. Verification To confirm that the registry is using the resources that you defined: Verify that the PVC claim value is identical to the name that you provided in your PVC definition: USD oc get configs.imageregistry.operator.openshift.io/cluster -o yaml Example output ... status: ... managementState: Managed pvc: claim: csi-pvc-imageregistry ... Verify that the status of the PVC is Bound : USD oc get pvc -n openshift-image-registry csi-pvc-imageregistry Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m 3.6. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). 
Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Important If the external network's CIDR range overlaps one of the default network ranges, you must change the matching network ranges in the install-config.yaml file before you start the installation process. The default network ranges are: Network Range machineNetwork 10.0.0.0/16 serviceNetwork 172.30.0.0/16 clusterNetwork 10.128.0.0/14 Warning If the installation program finds multiple networks with the same name, it sets one of them at random. To avoid this behavior, create unique names for resources in RHOSP. Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 3.7. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, login information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacert key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 3.8. 
Setting OpenStack Cloud Controller Manager options Optionally, you can edit the OpenStack Cloud Controller Manager (CCM) configuration for your cluster. This configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP). For a complete list of configuration parameters, see the "OpenStack Cloud Controller Manager reference guide" page in the "Installing on OpenStack" documentation. Procedure If you have not already generated manifest files for your cluster, generate them by running the following command: USD openshift-install --dir <destination_directory> create manifests In a text editor, open the cloud-provider configuration manifest file. For example: USD vi openshift/manifests/cloud-provider-config.yaml Modify the options according to the CCM reference guide. Configuring Octavia for load balancing is a common case. For example: #... [LoadBalancer] lb-provider = "amphora" 1 floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #... 1 This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT . 2 This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here. 3 This property controls whether the cloud provider creates health monitors for Octavia load balancers. Set the value to True to create health monitors. As of RHOSP 16.2, this feature is only available for the Amphora provider. 4 This property sets the frequency with which endpoints are monitored. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 5 This property sets the time that monitoring requests are open before timing out. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 6 This property defines how many successful monitoring requests are required before a load balancer is marked as online. The value must be an integer. This property is required if the value of the create-monitor property is True . Important Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section. Important You must set the value of the create-monitor property to True if you use services that have the value of the .spec.externalTrafficPolicy property set to Local . The OVN Octavia provider in RHOSP 16.2 does not support health monitors. Therefore, services that have ETP parameter values set to Local might not respond when the lb-provider value is set to "ovn" . Save the changes to the file and proceed with installation. Tip You can update your cloud provider configuration after you run the installer. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status. 3.9. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. 
Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 3.10. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. 
Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for OpenStack 3.10.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 
5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.10.2. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's GUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIPs and platform.openstack.ingressVIPs that are outside of the DHCP allocation pool. Important The CIDR ranges for networks are not adjustable after cluster installation. Red Hat does not provide direct guidance on determining the range during cluster installation because it requires careful consideration of the number of created pods per namespace. 3.10.3. Deploying a cluster with bare metal machines If you want your cluster to use bare metal machines, modify the install-config.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines. 
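Before you modify the file, you can confirm that the bare-metal prerequisites described in the Note and Prerequisites that follow are in place. The following is a minimal sketch, assuming administrative credentials and that the bare metal CLI plugin is installed; flavor and node names vary by environment:
openstack flavor list            # confirm that a bare metal flavor is defined
openstack baremetal node list    # confirm that the Bare Metal service (Ironic) responds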
Note Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not. Prerequisites The RHOSP Bare Metal service (Ironic) is enabled and accessible via the RHOSP Compute API. Bare metal is available as a RHOSP flavor . If your cluster runs on an RHOSP version that is more than 16.1.6 and less than 16.2.4, bare metal workers do not function due to a known issue that causes the metadata service to be unavailable for services on OpenShift Container Platform nodes. The RHOSP network supports both VM and bare metal server attachment. If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned. If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks. You created an install-config.yaml file as part of the OpenShift Container Platform installation process. Procedure In the install-config.yaml file, edit the flavors for machines: If you want to use bare-metal control plane machines, change the value of controlPlane.platform.openstack.type to a bare metal flavor. Change the value of compute.platform.openstack.type to a bare metal flavor. If you want to deploy your machines on a pre-existing network, change the value of platform.openstack.machinesSubnet to the RHOSP subnet UUID of the network. Control plane and compute machines must use the same subnet. An example bare metal install-config.yaml file controlPlane: platform: openstack: type: <bare_metal_control_plane_flavor> 1 ... compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: openstack: type: <bare_metal_compute_flavor> 2 replicas: 3 ... platform: openstack: machinesSubnet: <subnet_UUID> 3 ... 1 If you want to have bare-metal control plane machines, change this value to a bare metal flavor. 2 Change this value to a bare metal flavor to use for compute machines. 3 If you want to use a pre-existing network, change this value to the UUID of the RHOSP subnet. Use the updated install-config.yaml file to complete the installation process. The compute machines that are created during deployment use the flavor that you added to the file. Note The installer may time out while waiting for bare metal machines to boot. If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug 3.10.4. Cluster deployment on RHOSP provider networks You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process. RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them. In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network: OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation. 
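To confirm that an existing network is in fact a provider network, and to see how it maps to the physical data center network, an RHOSP administrator can run a check similar to the following sketch; the network name is a placeholder, and the provider fields are visible only to administrative users:
openstack network show <provider_network_name> -c "router:external" -c "provider:network_type" -c "provider:physical_network" -c "provider:segmentation_id"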
Example provider network types include flat (untagged) and VLAN (802.1Q tagged). Note A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections. You can learn more about provider and tenant networks in the RHOSP documentation . 3.10.4.1. RHOSP provider network requirements for cluster installation Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions: The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API. The RHOSP networking service has the port security and allowed address pairs extensions enabled . The provider network can be shared with other tenants. Tip Use the openstack network create command with the --share flag to create a network that can be shared. The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet. Tip To create a network for a project that is named "openshift," enter the following command USD openstack network create --project openshift To create a subnet for a project that is named "openshift," enter the following command USD openstack subnet create --project openshift To learn more about creating networks on RHOSP, read the provider networks documentation . If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network. Important Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network. Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default. Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example: USD openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ... Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project. 3.10.4.2. Deploying a cluster that has a primary interface on a provider network You can deploy an OpenShift Container Platform cluster that has its primary network interface on an Red Hat OpenStack Platform (RHOSP) provider network. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation". Procedure In a text editor, open the install-config.yaml file. Set the value of the platform.openstack.apiVIPs property to the IP address for the API VIP. Set the value of the platform.openstack.ingressVIPs property to the IP address for the Ingress VIP. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet. Important The platform.openstack.apiVIPs and platform.openstack.ingressVIPs properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block. Section of an installation configuration file for a cluster that relies on a RHOSP provider network ... platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # ... 
networking: machineNetwork: - cidr: 192.0.2.0/24 1 2 In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings. Warning You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network. Tip You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks . 3.10.5. Sample customized install-config.yaml file for RHOSP The following example install-config.yaml files demonstrate all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. Example 3.2. Example single stack install-config.yaml file apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... Example 3.3. Example dual stack install-config.yaml file apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.25.0/24 - cidr: fd2e:6f44:5dd8:c956::/64 serviceNetwork: - 172.30.0.0/16 - fd02::/112 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiVIPs: - 192.168.25.10 - fd2e:6f44:5dd8:c956:f816:3eff:fec3:5955 ingressVIPs: - 192.168.25.132 - fd2e:6f44:5dd8:c956:f816:3eff:fe40:aecb controlPlanePort: fixedIPs: - subnet: name: openshift-dual4 - subnet: name: openshift-dual6 network: name: openshift-dual fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 3.10.6. Configuring a cluster with dual-stack networking You can create a dual-stack cluster on RHOSP. However, the dual-stack configuration is enabled only if you are using an RHOSP network with IPv4 and IPv6 subnets. Note RHOSP does not support the conversion of an IPv4 single-stack cluster to a dual-stack cluster network. 3.10.6.1. Deploying the dual-stack cluster Procedure Create a network with IPv4 and IPv6 subnets. The available address modes for the ipv6-ra-mode and ipv6-address-mode fields are: dhcpv6-stateful , dhcpv6-stateless , and slaac . Note The dualstack network MTU must accommodate both the minimum MTU for IPv6, which is 1280, and the OVN-Kubernetes encapsulation overhead, which is 100. Note DHCP must be enabled on the subnets. Create the API and Ingress VIPs ports. Add the IPv6 subnet to the router to enable router advertisements. 
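The following is a consolidated sketch of the preceding steps, using placeholder resource names, the example CIDRs from the file below, and dhcpv6-stateful address modes; adapt the names, CIDRs, address modes, and router to your environment:
openstack network create dualstack
openstack subnet create subnet-v4 --network dualstack --subnet-range 192.168.25.0/24 --dhcp
openstack subnet create subnet-v6 --network dualstack --ip-version 6 --ipv6-ra-mode dhcpv6-stateful --ipv6-address-mode dhcpv6-stateful --subnet-range fd2e:6f44:5dd8:c956::/64 --dhcp
openstack port create api --network dualstack
openstack port create ingress --network dualstack
openstack router add subnet <router_name> subnet-v6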
If you are using a provider network, you can enable router advertisements by adding the network as an external gateway, which also enables external connectivity. To configure IPv4 and IPv6 address endpoints for cluster nodes, edit the install-config.yaml file. The following is an example of an install-config.yaml file: Example install-config.yaml apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: machineNetwork: 1 - cidr: "192.168.25.0/24" - cidr: "fd2e:6f44:5dd8:c956::/64" clusterNetwork: 2 - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 serviceNetwork: 3 - 172.30.0.0/16 - fd02::/112 platform: openstack: ingressVIPs: ['192.168.25.79', 'fd2e:6f44:5dd8:c956:f816:3eff:fef1:1bad'] 4 apiVIPs: ['192.168.25.199', 'fd2e:6f44:5dd8:c956:f816:3eff:fe78:cf36'] 5 controlPlanePort: 6 fixedIPs: 7 - subnet: 8 name: subnet-v4 id: subnet-v4-id - subnet: 9 name: subnet-v6 id: subnet-v6-id network: 10 name: dualstack id: network-id 1 2 3 You must specify an IP address range for both the IPv4 and IPv6 address families. 4 Specify the virtual IP (VIP) address endpoints for the Ingress VIP services to provide an interface to the cluster. 5 Specify the virtual IP (VIP) address endpoints for the API VIP services to provide an interface to the cluster. 6 Specify the dual-stack network details that are used by all of the nodes across the cluster. 7 The CIDR of any subnet specified in this field must match the CIDRs listed on networks.machineNetwork . 8 9 You can specify a value for either name or id , or both. 10 Specifying the network under the ControlPlanePort field is optional. Alternatively, if you want an IPv6 primary dual-stack cluster, edit the install-config.yaml file following the example below: Example install-config.yaml apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: machineNetwork: 1 - cidr: "fd2e:6f44:5dd8:c956::/64" - cidr: "192.168.25.0/24" clusterNetwork: 2 - cidr: fd01::/48 hostPrefix: 64 - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: 3 - fd02::/112 - 172.30.0.0/16 platform: openstack: ingressVIPs: ['fd2e:6f44:5dd8:c956:f816:3eff:fef1:1bad', '192.168.25.79'] 4 apiVIPs: ['fd2e:6f44:5dd8:c956:f816:3eff:fe78:cf36', '192.168.25.199'] 5 controlPlanePort: 6 fixedIPs: 7 - subnet: 8 name: subnet-v6 id: subnet-v6-id - subnet: 9 name: subnet-v4 id: subnet-v4-id network: 10 name: dualstack id: network-id 1 2 3 You must specify an IP address range for both the IPv4 and IPv6 address families. 4 Specify the virtual IP (VIP) address endpoints for the Ingress VIP services to provide an interface to the cluster. 5 Specify the virtual IP (VIP) address endpoints for the API VIP services to provide an interface to the cluster. 6 Specify the dual-stack network details that are used by all the nodes across the cluster. 7 The CIDR of any subnet specified in this field must match the CIDRs listed on networks.machineNetwork . 8 9 You can specify a value for either name or id , or both. 10 Specifying the network under the ControlPlanePort field is optional. Note When using an installation host in an isolated dual-stack network, the IPv6 address may not be reassigned correctly upon reboot. 
To resolve this problem on Red Hat Enterprise Linux (RHEL) 8, create a file called /etc/NetworkManager/system-connections/required-rhel8-ipv6.conf that contains the following configuration: [connection] type=ethernet [ipv6] addr-gen-mode=eui64 method=auto To resolve this problem on RHEL 9, create a file called /etc/NetworkManager/conf.d/required-rhel9-ipv6.conf that contains the following configuration: [connection] ipv6.addr-gen-mode=0 After you create and edit the file, reboot the installation host. Note The ip=dhcp,dhcp6 kernel argument, which is set on all of the nodes, results in a single Network Manager connection profile that is activated on multiple interfaces simultaneously. Because of this behavior, any additional network has the same connection enforced with an identical UUID. If you need an interface-specific configuration, create a new connection profile for that interface so that the default connection is no longer enforced on it. 3.10.7. Configuring a cluster with single-stack IPv6 networking You can create a single-stack IPv6 cluster on Red Hat OpenStack Platform (RHOSP) after you configure your RHOSP deployment. Important You cannot convert a dual-stack cluster into a single-stack IPv6 cluster. Prerequisites Your RHOSP deployment has an existing network with a DHCPv6-stateful IPv6 subnet to use as the machine network. DNS is configured for the existing IPv6 subnet. The IPv6 subnet is added to a RHOSP router, and the router is configured to send router advertisements (RAs). You added any additional IPv6 subnets that are used in the cluster to an RHOSP router to enable router advertisements. Note Using an IPv6 SLAAC subnet is not supported because any dns_nameservers addresses are not enforced by RHOSP Neutron. You have a mirror registry with an IPv6 interface. The RHOSP network accepts a minimum MTU size of 1442 bytes. You created API and ingress virtual IP addresses (VIPs) as RHOSP ports on the machine network and included those addresses in the install-config.yaml file. Procedure Create the API VIP port on the network by running the following command: USD openstack port create api --network <v6_machine_network> Create the Ingress VIP port on the network by running the following command: USD openstack port create ingress --network <v6_machine_network> After the networking resources are pre-created, deploy a cluster by using an install-config.yaml file that reflects your IPv6 network configuration. As an example: apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: machineNetwork: - cidr: "fd2e:6f44:5dd8:c956::/64" 1 clusterNetwork: - cidr: fd01::/48 hostPrefix: 64 serviceNetwork: - fd02::/112 platform: openstack: ingressVIPs: ['fd2e:6f44:5dd8:c956::383'] 2 apiVIPs: ['fd2e:6f44:5dd8:c956::9a'] 3 controlPlanePort: fixedIPs: 4 - subnet: name: subnet-v6 network: 5 name: v6-network imageContentSources: 6 - mirrors: - <mirror> source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - mirrors: - <mirror> source: registry.ci.openshift.org/ocp/release additionalTrustBundle: | <certificate_of_the_mirror> 1 The CIDR of the subnet specified in this field must match the CIDR of the subnet that is specified in the controlPlanePort section. 2 3 Use the address from the ports you generated in the steps as the values for the parameters platform.openstack.ingressVIPs and platform.openstack.apiVIPs . 
4 5 Items under the platform.openstack.controlPlanePort.fixedIPs and platform.openstack.controlPlanePort.network keys can contain an ID, a name, or both. 6 The imageContentSources section contains the mirror details. For more information on configuring a local image registry, see "Creating a mirror registry with mirror registry for Red Hat OpenShift". Additional resources See Creating a mirror registry with mirror registry for Red Hat OpenShift 3.10.8. Installation configuration for a cluster on OpenStack with a user-managed load balancer The following example install-config.yaml file demonstrates how to configure a cluster that uses an external, user-managed load balancer rather than the default internal load balancer. apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.10.0/24 platform: openstack: cloud: mycloud machinesSubnet: 8586bf1a-cc3c-4d40-bdf6-c243decc603a 1 apiVIPs: - 192.168.10.5 ingressVIPs: - 192.168.10.7 loadBalancer: type: UserManaged 2 1 Regardless of which load balancer you use, the load balancer is deployed to this subnet. 2 The UserManaged value indicates that you are using a user-managed load balancer. 3.11. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added.
SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.12. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 3.12.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> <application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use kubectl or oc . You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing.
Add the FIPs to the install-config.yaml file as the values of the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 3.12.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the install-config.yaml file, do not define the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own. If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 3.13. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.14. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 3.15. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 3.16. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.18, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . 
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 3.17. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . | [
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"openstack role add --user <user> --project <project> swiftoperator",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name>",
"oc apply -f <storage_class_file_name>",
"storageclass.storage.k8s.io/custom-csi-storageclass created",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: \"true\" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3",
"oc apply -f <pvc_file_name>",
"persistentvolumeclaim/csi-pvc-imageregistry created",
"oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{\"op\": \"replace\", \"path\": \"/spec/storage/pvc/claim\", \"value\": \"csi-pvc-imageregistry\"}]'",
"config.imageregistry.operator.openshift.io/cluster patched",
"oc get configs.imageregistry.operator.openshift.io/cluster -o yaml",
"status: managementState: Managed pvc: claim: csi-pvc-imageregistry",
"oc get pvc -n openshift-image-registry csi-pvc-imageregistry",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"openshift-install --dir <destination_directory> create manifests",
"vi openshift/manifests/cloud-provider-config.yaml",
"# [LoadBalancer] lb-provider = \"amphora\" 1 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #",
"oc edit configmap -n openshift-config cloud-provider-config",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"controlPlane: platform: openstack: type: <bare_metal_control_plane_flavor> 1 compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: openstack: type: <bare_metal_compute_flavor> 2 replicas: 3 platform: openstack: machinesSubnet: <subnet_UUID> 3",
"./openshift-install wait-for install-complete --log-level debug",
"openstack network create --project openshift",
"openstack subnet create --project openshift",
"openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2",
"platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.25.0/24 - cidr: fd2e:6f44:5dd8:c956::/64 serviceNetwork: - 172.30.0.0/16 - fd02::/112 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiVIPs: - 192.168.25.10 - fd2e:6f44:5dd8:c956:f816:3eff:fec3:5955 ingressVIPs: - 192.168.25.132 - fd2e:6f44:5dd8:c956:f816:3eff:fe40:aecb controlPlanePort: fixedIPs: - subnet: name: openshift-dual4 - subnet: name: openshift-dual6 network: name: openshift-dual fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: machineNetwork: 1 - cidr: \"192.168.25.0/24\" - cidr: \"fd2e:6f44:5dd8:c956::/64\" clusterNetwork: 2 - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 serviceNetwork: 3 - 172.30.0.0/16 - fd02::/112 platform: openstack: ingressVIPs: ['192.168.25.79', 'fd2e:6f44:5dd8:c956:f816:3eff:fef1:1bad'] 4 apiVIPs: ['192.168.25.199', 'fd2e:6f44:5dd8:c956:f816:3eff:fe78:cf36'] 5 controlPlanePort: 6 fixedIPs: 7 - subnet: 8 name: subnet-v4 id: subnet-v4-id - subnet: 9 name: subnet-v6 id: subnet-v6-id network: 10 name: dualstack id: network-id",
"apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: machineNetwork: 1 - cidr: \"fd2e:6f44:5dd8:c956::/64\" - cidr: \"192.168.25.0/24\" clusterNetwork: 2 - cidr: fd01::/48 hostPrefix: 64 - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: 3 - fd02::/112 - 172.30.0.0/16 platform: openstack: ingressVIPs: ['fd2e:6f44:5dd8:c956:f816:3eff:fef1:1bad', '192.168.25.79'] 4 apiVIPs: ['fd2e:6f44:5dd8:c956:f816:3eff:fe78:cf36', '192.168.25.199'] 5 controlPlanePort: 6 fixedIPs: 7 - subnet: 8 name: subnet-v6 id: subnet-v6-id - subnet: 9 name: subnet-v4 id: subnet-v4-id network: 10 name: dualstack id: network-id",
"[connection] type=ethernet [ipv6] addr-gen-mode=eui64 method=auto",
"[connection] ipv6.addr-gen-mode=0",
"openstack port create api --network <v6_machine_network>",
"openstack port create ingress --network <v6_machine_network>",
"apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: machineNetwork: - cidr: \"fd2e:6f44:5dd8:c956::/64\" 1 clusterNetwork: - cidr: fd01::/48 hostPrefix: 64 serviceNetwork: - fd02::/112 platform: openstack: ingressVIPs: ['fd2e:6f44:5dd8:c956::383'] 2 apiVIPs: ['fd2e:6f44:5dd8:c956::9a'] 3 controlPlanePort: fixedIPs: 4 - subnet: name: subnet-v6 network: 5 name: v6-network imageContentSources: 6 - mirrors: - <mirror> source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - mirrors: - <mirror> source: registry.ci.openshift.org/ocp/release additionalTrustBundle: | <certificate_of_the_mirror>",
"apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.10.0/24 platform: openstack: cloud: mycloud machinesSubnet: 8586bf1a-cc3c-4d40-bdf6-c243decc603a 1 apiVIPs: - 192.168.10.5 ingressVIPs: - 192.168.10.7 loadBalancer: type: UserManaged 2",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc get nodes",
"oc get clusterversion",
"oc get clusteroperator",
"oc get pods -A",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_openstack/installing-openstack-installer-custom |
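For reference alongside the Amphora example in section 3.8, the following is a minimal sketch of the OVN variant of the cloud-provider LoadBalancer configuration; the floating network UUID is a placeholder, and create-monitor is omitted because the OVN Octavia provider in RHOSP 16.2 does not support health monitors:
#...
[LoadBalancer]
lb-provider = "ovn"
lb-method = SOURCE_IP_PORT
floating-network-id = "<external_network_UUID>"
#...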
Chapter 249. Openshift Build Config Component | Chapter 249. Openshift Build Config Component
Available as of Camel version 2.17
The OpenShift Build Config component is one of the Kubernetes components. It provides a producer to execute Kubernetes build config operations.
249.1. Component Options
The Openshift Build Config component has no options.
249.2. Endpoint Options
The Openshift Build Config endpoint is configured using URI syntax ( openshift-build-configs:masterUrl ) with the following path and query parameters:
249.2.1. Path Parameters (1 parameter):
masterUrl: Required. The Kubernetes API server URL. Type: String.
249.2.2. Query Parameters (20 parameters):
apiVersion (producer): The Kubernetes API version to use. Type: String.
dnsDomain (producer): The DNS domain, used for the ServiceCall EIP. Type: String.
kubernetesClient (producer): Default KubernetesClient to use if provided. Type: KubernetesClient.
operation (producer): Producer operation to do on Kubernetes. Type: String.
portName (producer): The port name, used for the ServiceCall EIP. Type: String.
portProtocol (producer): The port protocol, used for the ServiceCall EIP. Default: tcp. Type: String.
connectionTimeout (advanced): Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Type: Integer.
synchronous (advanced): Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). Default: false. Type: boolean.
caCertData (security): The CA cert data. Type: String.
caCertFile (security): The CA cert file. Type: String.
clientCertData (security): The client cert data. Type: String.
clientCertFile (security): The client cert file. Type: String.
clientKeyAlgo (security): The key algorithm used by the client. Type: String.
clientKeyData (security): The client key data. Type: String.
clientKeyFile (security): The client key file. Type: String.
clientKeyPassphrase (security): The client key passphrase. Type: String.
oauthToken (security): The auth token. Type: String.
password (security): Password to connect to Kubernetes. Type: String.
trustCerts (security): Define whether the certs that are used are trusted anyway or not. Type: Boolean.
username (security): Username to connect to Kubernetes. Type: String. | [
"openshift-build-configs:masterUrl"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/openshift-build-configs-component |
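A minimal producer URI sketch for this component, assuming an in-cluster API server reachable at kubernetes.default.svc and a service account token; the operation value listBuildConfigs is an assumption based on the Kubernetes component operation names, so verify it against the operation list for your Camel version:
openshift-build-configs:https://kubernetes.default.svc:443?oauthToken=RAW(<token>)&operation=listBuildConfigs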
Appendix A. S2I scripts and Maven | Appendix A. S2I scripts and Maven
The Red Hat JBoss Web Server for OpenShift image includes S2I scripts and Maven.
A.1. Maven artifact repository mirrors and JWS for OpenShift
A Maven repository holds build artifacts and dependencies, such as the project Java archive (JAR) files, library JAR files, plugins, or any other project-specific artifacts. A Maven repository also defines locations that you can download artifacts from while performing the source-to-image (S2I) build. In addition to using the Maven Central Repository , some organizations also deploy a local custom repository (mirror).
A local mirror provides the following benefits:
Availability of a synchronized mirror that is geographically closer and faster
Greater control over the repository content
Possibility to share artifacts across different teams (developers and continuous integration (CI)) without relying on public servers and repositories
Improved build times
A Maven repository manager can serve as a local cache for a mirror. If the repository manager is already deployed and can be reached externally at a specified URL location, the S2I build can use this repository. You can use an internal Maven repository by adding the MAVEN_MIRROR_URL environment variable to the build configuration of the application.
Additional resources Apache Maven Project: Best Practice - Using a Repository Manager
A.1.1. Using an internal Maven repository for a new build configuration
You can add the MAVEN_MIRROR_URL environment variable to a new build configuration of your application by specifying the --build-env option with the oc new-app command or the oc new-build command.
Procedure Enter the following command: Note The preceding command assumes that the repository manager is already deployed and can be reached at http://10.0.0.1:8080/repository/internal/ .
A.1.2. Using an internal Maven repository for an existing build configuration
You can add the MAVEN_MIRROR_URL environment variable to an existing build configuration of your application by specifying the name of the build configuration with the oc set env command.
Procedure Identify the build configuration that requires the MAVEN_MIRROR_URL variable: The preceding command produces the following type of output: Note In the preceding example, jws is the name of the build configuration. Add the MAVEN_MIRROR_URL environment variable to buildconfig/jws : Verify the build configuration has updated: Schedule a new build of the application by using oc start-build . Note During the application build process, Maven dependencies are downloaded from the repository manager rather than from the default public repositories. When the build process is completed, the mirror contains all the dependencies that are retrieved and used during the build process.
A.2. Scripts included on the Red Hat JBoss Web Server for OpenShift image
The Red Hat JBoss Web Server for OpenShift image includes scripts to run Catalina and to use Maven to create and deploy the .war package.
run: Runs Catalina (Tomcat).
assemble: Uses Maven to build the web application source, create the .war file, and move the .war file to the USDJWS_HOME/tomcat/webapps directory.
A.3. JWS for OpenShift datasources
JWS for OpenShift provides three types of data sources:
Default internal data sources: PostgreSQL, MySQL, and MongoDB data sources are available on OpenShift by default through the Red Hat Registry. These data sources do not require additional environment files to be configured for image streams.
To enable a database to be discovered and used as a data source, you can set the DB_SERVICE_PREFIX_MAPPING environment variable to the name of the OpenShift service. Other internal data sources These data sources are run on OpenShift but they are not available by default through the Red Hat Registry. Environment files that are added to OpenShift Secrets provide configuration of other internal data sources. External data sources These data sources are not run on OpenShift. Environment files that are added to OpenShift Secrets provide configuration of external data sources. ENV_FILES property You can add the environment variables for data sources to the OpenShift Secret for the project. You can use the ENV_FILES property to call these environment files within the template. DB_SERVICE_PREFIX_MAPPING environment variable Data sources are automatically created based on the value of certain environment variables. The DB_SERVICE_PREFIX_MAPPING environment variable defines JNDI mappings for the data sources. The allowed value for the DB_SERVICE_PREFIX_MAPPING variable is a comma-separated list of POOLNAME-DATABASETYPE=PREFIX triplets. Each triplet consists of the following values: POOLNAME is used as the pool-name in the data source. DATABASETYPE is the database driver to use. PREFIX is the prefix in the names of environment variables that are used to configure the data source. For each POOLNAME-DATABASETYPE=PREFIX triplet that is defined in the DB_SERVICE_PREFIX_MAPPING environment variable, the launch script creates a separate data source, which is executed when running the image. Additional resources Datasource configuration environment variables A.4. JWS for OpenShift compatible environment variables You can modify the build configuration by including environment variables with the source-to-image (S2I) build command. For more information, see Maven artifact repository mirrors and JWS for OpenShift . The following table lists the valid environment variables for the Red Hat JBoss Web Server for OpenShift images: Variable Name Display Name Description Example Value ARTIFACT_DIR N/A .war , .ear , and .jar files from this directory will be copied into the deployments directory target APPLICATION_NAME Application Name The name for the application jws-app CONTEXT_DIR Context Directory Path within Git project to build; empty for root project directory tomcat-websocket-chat GITHUB_WEBHOOK_SECRET Github Webhook Secret Github trigger secret Expression from: [a-zA-Z0-9]{8} GENERIC_WEBHOOK_SECRET Generic Webhook Secret Generic build trigger secret Expression from: [a-zA-Z0-9]{8} HOSTNAME_HTTP Custom HTTP Route Hostname Custom hostname for http service route. Leave blank for default hostname <application-name>-<project>.<default-domain-suffix> HOSTNAME_HTTPS Custom HTTPS Route Hostname Custom hostname for https service route. 
Leave blank for default hostname <application-name>-<project>.<default-domain-suffix> IMAGE_STREAM_NAMESPACE Imagestream Namespace Namespace in which the ImageStreams for Red Hat Middleware images are installed openshift JWS_HTTPS_CERTIFICATE Certificate File Name The name of the certificate file rsa-cert.pem JWS_HTTPS_CERTIFICATE_CHAIN Certificate Chain File Name The name of the certificate chain file ca-chain.cert.pem JWS_HTTPS_CERTIFICATE_DIR Certificate Directory Name The name of a directory where the certificate is stored cert JWS_HTTPS_CERTIFICATE_KEY Certificate Key File Name The name of the certificate key file rsa-key.pem JWS_HTTPS_CERTIFICATE_PASSWORD Certificate Password The Certificate Password P5ssw0rd SOURCE_REPOSITORY_URL Git Repository URL Git source URI for Application https://github.com/web-servers/tomcat-websocket-chat-quickstart.git SOURCE_REPOSITORY_REFERENCE Git Reference Git branch/tag reference 1.2 IMAGE_STREAM_NAMESPACE Imagestream Namespace Namespace in which the ImageStreams for Red Hat Middleware images are installed openshift MAVEN_MIRROR_URL Maven Mirror URL URL of a Maven mirror/repository manager to configure. http://10.0.0.1:8080/repository/internal/ | [
"oc new-app https://github.com/web-servers/tomcat-websocket-chat-quickstart.git#main --image-stream=jboss-webserver60-openjdk17-tomcat10-openshift-ubi8:latest --context-dir='tomcat-websocket-chat' --build-env MAVEN_MIRROR_URL=http://10.0.0.1:8080/repository/internal/ --name=jws-wsch-app",
"oc get bc -o name",
"buildconfig/jws",
"oc set env bc/jws MAVEN_MIRROR_URL=\"http://10.0.0.1:8080/repository/internal/\" buildconfig \"jws\" updated",
"oc set env bc/jws --list buildconfigs jws MAVEN_MIRROR_URL=http://10.0.0.1:8080/repository/internal/"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_for_openshift/jws_on_openshift_reference |
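To make the MAVEN_MIRROR_URL and DB_SERVICE_PREFIX_MAPPING variables from the appendix above more concrete, the following is a minimal sketch, not taken from the product documentation, of how they might be set directly in manifests rather than with oc set env. The resource names (jws-app), the mirror URL, and the DEMO_* variable names are assumptions for illustration only; consult the datasource configuration environment variables reference for the exact prefixed names.
# Sketch only: a BuildConfig fragment that passes MAVEN_MIRROR_URL to the S2I build,
# equivalent in effect to the oc set env bc/jws command listed above. Names are assumed.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: jws-app                      # assumed name
spec:
  strategy:
    type: Source
    sourceStrategy:
      env:
        - name: MAVEN_MIRROR_URL
          value: http://10.0.0.1:8080/repository/internal/
---
# Sketch only: a DeploymentConfig fragment showing the POOLNAME-DATABASETYPE=PREFIX triplet
# format of DB_SERVICE_PREFIX_MAPPING together with variables read under the chosen prefix.
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: jws-app                      # assumed name
spec:
  template:
    spec:
      containers:
        - name: jws-app
          env:
            - name: DB_SERVICE_PREFIX_MAPPING
              value: demo-postgresql=DEMO   # pool "demo", driver "postgresql", env prefix "DEMO"
            - name: DEMO_USERNAME           # assumed prefixed variable names; check the
              value: demouser               # datasource configuration reference for the real set
            - name: DEMO_PASSWORD
              value: demopass
            - name: DEMO_DATABASE
              value: demodb
Applying fragments like these with oc apply and then running oc start-build would have the same effect as the command-line variants shown in the record above; they are offered only as an alternative way to express the same variables.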
Chapter 5. ImageStreamImport [image.openshift.io/v1] | Chapter 5. ImageStreamImport [image.openshift.io/v1] Description The image stream import resource provides an easy way for a user to find and import container images from other container image registries into the server. Individual images or an entire image repository may be imported, and users may choose to see the results of the import prior to tagging the resulting images into the specified image stream. This API is intended for end-user tools that need to see the metadata of the image prior to import (for instance, to generate an application from it). Clients that know the desired image can continue to create spec.tags directly into their image streams. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec status 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ImageStreamImportSpec defines what images should be imported. status object ImageStreamImportStatus contains information about the status of an image stream import. 5.1.1. .spec Description ImageStreamImportSpec defines what images should be imported. Type object Required import Property Type Description images array Images are a list of individual images to import. images[] object ImageImportSpec describes a request to import a specific image. import boolean Import indicates whether to perform an import - if so, the specified tags are set on the spec and status of the image stream defined by the type meta. repository object RepositoryImportSpec describes a request to import images from a container image repository. 5.1.2. .spec.images Description Images are a list of individual images to import. Type array 5.1.3. .spec.images[] Description ImageImportSpec describes a request to import a specific image. Type object Required from Property Type Description from ObjectReference From is the source of an image to import; only kind DockerImage is allowed importPolicy object TagImportPolicy controls how images related to this tag will be imported. includeManifest boolean IncludeManifest determines if the manifest for each image is returned in the response referencePolicy object TagReferencePolicy describes how pull-specs for images in this image stream tag are generated when image change triggers in deployment configs or builds are resolved. This allows the image stream author to control how images are accessed. to LocalObjectReference_v2 To is a tag in the current image stream to assign the imported image to, if name is not specified the default tag from from.name will be used 5.1.4. .spec.images[].importPolicy Description TagImportPolicy controls how images related to this tag will be imported. 
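Before the field-by-field listing continues, the following is a minimal sketch of an ImageStreamImport request that exercises the spec fields introduced above (spec.import, spec.images[].from, to, and the importPolicy whose fields are listed next). The stream name, namespace, and image pull spec are assumed values for illustration, not examples taken from this reference.
# Minimal sketch of an ImageStreamImport; metadata.name is the image stream the tags are
# imported into, and all names and the pull spec below are assumed values.
apiVersion: image.openshift.io/v1
kind: ImageStreamImport
metadata:
  name: sample-stream            # assumed image stream name
  namespace: sample-project      # assumed namespace
spec:
  import: true                   # perform the import and set the resulting tags on the stream
  images:
    - from:
        kind: DockerImage        # only kind DockerImage is allowed as a source
        name: registry.example.com/team/app:1.0   # assumed pull spec
      to:
        name: "1.0"              # tag in this image stream to assign the imported image to
      importPolicy:
        insecure: false
        scheduled: true          # periodically re-check the source tag
      referencePolicy:
        type: Source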
Type object Property Type Description importMode string ImportMode describes how to import an image manifest. insecure boolean Insecure is true if the server may bypass certificate verification or connect directly over HTTP during image import. scheduled boolean Scheduled indicates to the server that this tag should be periodically checked to ensure it is up to date, and imported 5.1.5. .spec.images[].referencePolicy Description TagReferencePolicy describes how pull-specs for images in this image stream tag are generated when image change triggers in deployment configs or builds are resolved. This allows the image stream author to control how images are accessed. Type object Required type Property Type Description type string Type determines how the image pull spec should be transformed when the image stream tag is used in deployment config triggers or new builds. The default value is Source , indicating the original location of the image should be used (if imported). The user may also specify Local , indicating that the pull spec should point to the integrated container image registry and leverage the registry's ability to proxy the pull to an upstream registry. Local allows the credentials used to pull this image to be managed from the image stream's namespace, so others on the platform can access a remote image but have no access to the remote secret. It also allows the image layers to be mirrored into the local registry which the images can still be pulled even if the upstream registry is unavailable. 5.1.6. .spec.repository Description RepositoryImportSpec describes a request to import images from a container image repository. Type object Required from Property Type Description from ObjectReference From is the source for the image repository to import; only kind DockerImage and a name of a container image repository is allowed importPolicy object TagImportPolicy controls how images related to this tag will be imported. includeManifest boolean IncludeManifest determines if the manifest for each image is returned in the response referencePolicy object TagReferencePolicy describes how pull-specs for images in this image stream tag are generated when image change triggers in deployment configs or builds are resolved. This allows the image stream author to control how images are accessed. 5.1.7. .spec.repository.importPolicy Description TagImportPolicy controls how images related to this tag will be imported. Type object Property Type Description importMode string ImportMode describes how to import an image manifest. insecure boolean Insecure is true if the server may bypass certificate verification or connect directly over HTTP during image import. scheduled boolean Scheduled indicates to the server that this tag should be periodically checked to ensure it is up to date, and imported 5.1.8. .spec.repository.referencePolicy Description TagReferencePolicy describes how pull-specs for images in this image stream tag are generated when image change triggers in deployment configs or builds are resolved. This allows the image stream author to control how images are accessed. Type object Required type Property Type Description type string Type determines how the image pull spec should be transformed when the image stream tag is used in deployment config triggers or new builds. The default value is Source , indicating the original location of the image should be used (if imported). 
The user may also specify Local , indicating that the pull spec should point to the integrated container image registry and leverage the registry's ability to proxy the pull to an upstream registry. Local allows the credentials used to pull this image to be managed from the image stream's namespace, so others on the platform can access a remote image but have no access to the remote secret. It also allows the image layers to be mirrored into the local registry which the images can still be pulled even if the upstream registry is unavailable. 5.1.9. .status Description ImageStreamImportStatus contains information about the status of an image stream import. Type object Property Type Description images array Images is set with the result of importing spec.images images[] object ImageImportStatus describes the result of an image import. import object An ImageStream stores a mapping of tags to images, metadata overrides that are applied when images are tagged in a stream, and an optional reference to a container image repository on a registry. Users typically update the spec.tags field to point to external images which are imported from container registries using credentials in your namespace with the pull secret type, or to existing image stream tags and images which are immediately accessible for tagging or pulling. The history of images applied to a tag is visible in the status.tags field and any user who can view an image stream is allowed to tag that image into their own image streams. Access to pull images from the integrated registry is granted by having the "get imagestreams/layers" permission on a given image stream. Users may remove a tag by deleting the imagestreamtag resource, which causes both spec and status for that tag to be removed. Image stream history is retained until an administrator runs the prune operation, which removes references that are no longer in use. To preserve a historical image, ensure there is a tag in spec pointing to that image by its digest. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). repository object RepositoryImportStatus describes the result of an image repository import 5.1.10. .status.images Description Images is set with the result of importing spec.images Type array 5.1.11. .status.images[] Description ImageImportStatus describes the result of an image import. Type object Required status Property Type Description image object Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). manifests array Manifests holds sub-manifests metadata when importing a manifest list manifests[] object Image is an immutable representation of a container image and metadata at a point in time. 
Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). status Status_v5 Status is the status of the image import, including errors encountered while retrieving the image tag string Tag is the tag this image was located under, if any 5.1.12. .status.images[].image Description Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources dockerImageConfig string DockerImageConfig is a JSON blob that the runtime uses to set up the container. This is a part of manifest schema v2. Will not be set when the image represents a manifest list. dockerImageLayers array DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. dockerImageLayers[] object ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. dockerImageManifest string DockerImageManifest is the raw JSON of the manifest dockerImageManifestMediaType string DockerImageManifestMediaType specifies the mediaType of manifest. This is a part of manifest schema v2. dockerImageManifests array DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. dockerImageManifests[] object ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. dockerImageMetadata RawExtension DockerImageMetadata contains metadata about this image dockerImageMetadataVersion string DockerImageMetadataVersion conveys the version of the object, which if empty defaults to "1.0" dockerImageReference string DockerImageReference is the string that can be used to pull this image. dockerImageSignatures array (string) DockerImageSignatures provides the signatures as opaque blobs. This is a part of manifest schema v1. 
kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signatures array Signatures holds all signatures of the image. signatures[] object ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 5.1.13. .status.images[].image.dockerImageLayers Description DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. Type array 5.1.14. .status.images[].image.dockerImageLayers[] Description ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. Type object Required name size mediaType Property Type Description mediaType string MediaType of the referenced object. name string Name of the layer as defined by the underlying store. size integer Size of the layer in bytes as defined by the underlying store. 5.1.15. .status.images[].image.dockerImageManifests Description DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. Type array 5.1.16. .status.images[].image.dockerImageManifests[] Description ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. Type object Required digest mediaType manifestSize architecture os Property Type Description architecture string Architecture specifies the supported CPU architecture, for example amd64 or ppc64le . digest string Digest is the unique identifier for the manifest. It refers to an Image object. manifestSize integer ManifestSize represents the size of the raw object contents, in bytes. mediaType string MediaType defines the type of the manifest, possible values are application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json or application/vnd.docker.distribution.manifest.v1+json. os string OS specifies the operating system, for example linux . variant string Variant is an optional field representing a variant of the CPU, for example v6 to specify a particular CPU variant of the ARM CPU. 5.1.17. .status.images[].image.signatures Description Signatures holds all signatures of the image. Type array 5.1.18. .status.images[].image.signatures[] Description ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification.
The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required type content Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources conditions array Conditions represent the latest available observations of a signature's current state. conditions[] object SignatureCondition describes an image signature condition of particular kind at particular probe time. content string Required: An opaque binary string which is an image's signature. created Time If specified, it is the time of signature's creation. imageIdentity string A human readable string representing image's identity. It could be a product name and version, or an image pull spec (e.g. "registry.access.redhat.com/rhel7/rhel:7.2"). issuedBy object SignatureIssuer holds information about an issuer of signing certificate or key. issuedTo object SignatureSubject holds information about a person or entity who created the signature. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signedClaims object (string) Contains claims from the signature. type string Required: Describes a type of stored blob. 5.1.19. .status.images[].image.signatures[].conditions Description Conditions represent the latest available observations of a signature's current state. Type array 5.1.20. .status.images[].image.signatures[].conditions[] Description SignatureCondition describes an image signature condition of particular kind at particular probe time. Type object Required type status Property Type Description lastProbeTime Time Last time the condition was checked. lastTransitionTime Time Last time the condition transit from one status to another. message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of signature condition, Complete or Failed. 5.1.21. .status.images[].image.signatures[].issuedBy Description SignatureIssuer holds information about an issuer of signing certificate or key. Type object Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. 5.1.22. .status.images[].image.signatures[].issuedTo Description SignatureSubject holds information about a person or entity who created the signature. Type object Required publicKeyID Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. publicKeyID string If present, it is a human readable key id of public key belonging to the subject used to verify image signature. 
It should contain at least 64 lowest bits of public key's fingerprint (e.g. 0x685ebe62bf278440). 5.1.23. .status.images[].manifests Description Manifests holds sub-manifests metadata when importing a manifest list Type array 5.1.24. .status.images[].manifests[] Description Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources dockerImageConfig string DockerImageConfig is a JSON blob that the runtime uses to set up the container. This is a part of manifest schema v2. Will not be set when the image represents a manifest list. dockerImageLayers array DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. dockerImageLayers[] object ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. dockerImageManifest string DockerImageManifest is the raw JSON of the manifest dockerImageManifestMediaType string DockerImageManifestMediaType specifies the mediaType of manifest. This is a part of manifest schema v2. dockerImageManifests array DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. dockerImageManifests[] object ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. dockerImageMetadata RawExtension DockerImageMetadata contains metadata about this image dockerImageMetadataVersion string DockerImageMetadataVersion conveys the version of the object, which if empty defaults to "1.0" dockerImageReference string DockerImageReference is the string that can be used to pull this image. dockerImageSignatures array (string) DockerImageSignatures provides the signatures as opaque blobs. This is a part of manifest schema v1. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signatures array Signatures holds all signatures of the image. signatures[] object ImageSignature holds a signature of an image. 
It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 5.1.25. .status.images[].manifests[].dockerImageLayers Description DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. Type array 5.1.26. .status.images[].manifests[].dockerImageLayers[] Description ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. Type object Required name size mediaType Property Type Description mediaType string MediaType of the referenced object. name string Name of the layer as defined by the underlying store. size integer Size of the layer in bytes as defined by the underlying store. 5.1.27. .status.images[].manifests[].dockerImageManifests Description DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. Type array 5.1.28. .status.images[].manifests[].dockerImageManifests[] Description ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. Type object Required digest mediaType manifestSize architecture os Property Type Description architecture string Architecture specifies the supported CPU architecture, for example amd64 or ppc64le . digest string Digest is the unique identifier for the manifest. It refers to an Image object. manifestSize integer ManifestSize represents the size of the raw object contents, in bytes. mediaType string MediaType defines the type of the manifest, possible values are application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json or application/vnd.docker.distribution.manifest.v1+json. os string OS specifies the operating system, for example linux . variant string Variant is an optional field representing a variant of the CPU, for example v6 to specify a particular CPU variant of the ARM CPU. 5.1.29. .status.images[].manifests[].signatures Description Signatures holds all signatures of the image. Type array 5.1.30. .status.images[].manifests[].signatures[] Description ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required type content Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources conditions array Conditions represent the latest available observations of a signature's current state. conditions[] object SignatureCondition describes an image signature condition of particular kind at particular probe time. content string Required: An opaque binary string which is an image's signature. created Time If specified, it is the time of signature's creation. imageIdentity string A human readable string representing image's identity. It could be a product name and version, or an image pull spec (e.g. "registry.access.redhat.com/rhel7/rhel:7.2"). issuedBy object SignatureIssuer holds information about an issuer of signing certificate or key. issuedTo object SignatureSubject holds information about a person or entity who created the signature. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signedClaims object (string) Contains claims from the signature. type string Required: Describes a type of stored blob. 5.1.31. .status.images[].manifests[].signatures[].conditions Description Conditions represent the latest available observations of a signature's current state. Type array 5.1.32. .status.images[].manifests[].signatures[].conditions[] Description SignatureCondition describes an image signature condition of particular kind at particular probe time. Type object Required type status Property Type Description lastProbeTime Time Last time the condition was checked. lastTransitionTime Time Last time the condition transit from one status to another. message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of signature condition, Complete or Failed. 5.1.33. .status.images[].manifests[].signatures[].issuedBy Description SignatureIssuer holds information about an issuer of signing certificate or key. Type object Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. 5.1.34. .status.images[].manifests[].signatures[].issuedTo Description SignatureSubject holds information about a person or entity who created the signature. Type object Required publicKeyID Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. publicKeyID string If present, it is a human readable key id of public key belonging to the subject used to verify image signature. It should contain at least 64 lowest bits of public key's fingerprint (e.g. 0x685ebe62bf278440). 5.1.35. .status.import Description An ImageStream stores a mapping of tags to images, metadata overrides that are applied when images are tagged in a stream, and an optional reference to a container image repository on a registry. 
Users typically update the spec.tags field to point to external images which are imported from container registries using credentials in your namespace with the pull secret type, or to existing image stream tags and images which are immediately accessible for tagging or pulling. The history of images applied to a tag is visible in the status.tags field and any user who can view an image stream is allowed to tag that image into their own image streams. Access to pull images from the integrated registry is granted by having the "get imagestreams/layers" permission on a given image stream. Users may remove a tag by deleting the imagestreamtag resource, which causes both spec and status for that tag to be removed. Image stream history is retained until an administrator runs the prune operation, which removes references that are no longer in use. To preserve a historical image, ensure there is a tag in spec pointing to that image by its digest. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ImageStreamSpec represents options for ImageStreams. status object ImageStreamStatus contains information about the state of this image stream. 5.1.36. .status.import.spec Description ImageStreamSpec represents options for ImageStreams. Type object Property Type Description dockerImageRepository string dockerImageRepository is optional, if specified this stream is backed by a container repository on this server Deprecated: This field is deprecated as of v3.7 and will be removed in a future release. Specify the source for the tags to be imported in each tag via the spec.tags.from reference instead. lookupPolicy object ImageLookupPolicy describes how an image stream can be used to override the image references used by pods, builds, and other resources in a namespace. tags array tags map arbitrary string values to specific image locators tags[] object TagReference specifies optional annotations for images using this tag and an optional reference to an ImageStreamTag, ImageStreamImage, or DockerImage this tag should track. 5.1.37. .status.import.spec.lookupPolicy Description ImageLookupPolicy describes how an image stream can be used to override the image references used by pods, builds, and other resources in a namespace. Type object Required local Property Type Description local boolean local will change the docker short image references (like "mysql" or "php:latest") on objects in this namespace to the image ID whenever they match this image stream, instead of reaching out to a remote registry. The name will be fully qualified to an image ID if found. 
The tag's referencePolicy is taken into account on the replaced value. Only works within the current namespace. 5.1.38. .status.import.spec.tags Description tags map arbitrary string values to specific image locators Type array 5.1.39. .status.import.spec.tags[] Description TagReference specifies optional annotations for images using this tag and an optional reference to an ImageStreamTag, ImageStreamImage, or DockerImage this tag should track. Type object Required name Property Type Description annotations object (string) Optional; if specified, annotations that are applied to images retrieved via ImageStreamTags. from ObjectReference Optional; if specified, a reference to another image that this tag should point to. Valid values are ImageStreamTag, ImageStreamImage, and DockerImage. ImageStreamTag references can only reference a tag within this same ImageStream. generation integer Generation is a counter that tracks mutations to the spec tag (user intent). When a tag reference is changed the generation is set to match the current stream generation (which is incremented every time spec is changed). Other processes in the system like the image importer observe that the generation of spec tag is newer than the generation recorded in the status and use that as a trigger to import the newest remote tag. To trigger a new import, clients may set this value to zero which will reset the generation to the latest stream generation. Legacy clients will send this value as nil which will be merged with the current tag generation. importPolicy object TagImportPolicy controls how images related to this tag will be imported. name string Name of the tag reference boolean Reference states if the tag will be imported. Default value is false, which means the tag will be imported. referencePolicy object TagReferencePolicy describes how pull-specs for images in this image stream tag are generated when image change triggers in deployment configs or builds are resolved. This allows the image stream author to control how images are accessed. 5.1.40. .status.import.spec.tags[].importPolicy Description TagImportPolicy controls how images related to this tag will be imported. Type object Property Type Description importMode string ImportMode describes how to import an image manifest. insecure boolean Insecure is true if the server may bypass certificate verification or connect directly over HTTP during image import. scheduled boolean Scheduled indicates to the server that this tag should be periodically checked to ensure it is up to date, and imported 5.1.41. .status.import.spec.tags[].referencePolicy Description TagReferencePolicy describes how pull-specs for images in this image stream tag are generated when image change triggers in deployment configs or builds are resolved. This allows the image stream author to control how images are accessed. Type object Required type Property Type Description type string Type determines how the image pull spec should be transformed when the image stream tag is used in deployment config triggers or new builds. The default value is Source , indicating the original location of the image should be used (if imported). The user may also specify Local , indicating that the pull spec should point to the integrated container image registry and leverage the registry's ability to proxy the pull to an upstream registry. 
Local allows the credentials used to pull this image to be managed from the image stream's namespace, so others on the platform can access a remote image but have no access to the remote secret. It also allows the image layers to be mirrored into the local registry which the images can still be pulled even if the upstream registry is unavailable. 5.1.42. .status.import.status Description ImageStreamStatus contains information about the state of this image stream. Type object Required dockerImageRepository Property Type Description dockerImageRepository string DockerImageRepository represents the effective location this stream may be accessed at. May be empty until the server determines where the repository is located publicDockerImageRepository string PublicDockerImageRepository represents the public location from where the image can be pulled outside the cluster. This field may be empty if the administrator has not exposed the integrated registry externally. tags array Tags are a historical record of images associated with each tag. The first entry in the TagEvent array is the currently tagged image. tags[] object NamedTagEventList relates a tag to its image history. 5.1.43. .status.import.status.tags Description Tags are a historical record of images associated with each tag. The first entry in the TagEvent array is the currently tagged image. Type array 5.1.44. .status.import.status.tags[] Description NamedTagEventList relates a tag to its image history. Type object Required tag items Property Type Description conditions array Conditions is an array of conditions that apply to the tag event list. conditions[] object TagEventCondition contains condition information for a tag event. items array Standard object's metadata. items[] object TagEvent is used by ImageStreamStatus to keep a historical record of images associated with a tag. tag string Tag is the tag for which the history is recorded 5.1.45. .status.import.status.tags[].conditions Description Conditions is an array of conditions that apply to the tag event list. Type array 5.1.46. .status.import.status.tags[].conditions[] Description TagEventCondition contains condition information for a tag event. Type object Required type status generation Property Type Description generation integer Generation is the spec tag generation that this status corresponds to lastTransitionTime Time LastTransitionTIme is the time the condition transitioned from one status to another. message string Message is a human readable description of the details about last transition, complementing reason. reason string Reason is a brief machine readable explanation for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of tag event condition, currently only ImportSuccess 5.1.47. .status.import.status.tags[].items Description Standard object's metadata. Type array 5.1.48. .status.import.status.tags[].items[] Description TagEvent is used by ImageStreamStatus to keep a historical record of images associated with a tag. Type object Required created dockerImageReference image generation Property Type Description created Time Created holds the time the TagEvent was created dockerImageReference string DockerImageReference is the string that can be used to pull this image generation integer Generation is the spec tag generation that resulted in this tag being updated image string Image is the image 5.1.49. 
.status.repository Description RepositoryImportStatus describes the result of an image repository import Type object Property Type Description additionalTags array (string) AdditionalTags are tags that exist in the repository but were not imported because a maximum limit of automatic imports was applied. images array Images is a list of images successfully retrieved by the import of the repository. images[] object ImageImportStatus describes the result of an image import. status Status_v5 Status reflects whether any failure occurred during import 5.1.50. .status.repository.images Description Images is a list of images successfully retrieved by the import of the repository. Type array 5.1.51. .status.repository.images[] Description ImageImportStatus describes the result of an image import. Type object Required status Property Type Description image object Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). manifests array Manifests holds sub-manifests metadata when importing a manifest list manifests[] object Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). status Status_v5 Status is the status of the image import, including errors encountered while retrieving the image tag string Tag is the tag this image was located under, if any 5.1.52. .status.repository.images[].image Description Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 
Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources dockerImageConfig string DockerImageConfig is a JSON blob that the runtime uses to set up the container. This is a part of manifest schema v2. Will not be set when the image represents a manifest list. dockerImageLayers array DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. dockerImageLayers[] object ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. dockerImageManifest string DockerImageManifest is the raw JSON of the manifest dockerImageManifestMediaType string DockerImageManifestMediaType specifies the mediaType of manifest. This is a part of manifest schema v2. dockerImageManifests array DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. dockerImageManifests[] object ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. dockerImageMetadata RawExtension DockerImageMetadata contains metadata about this image dockerImageMetadataVersion string DockerImageMetadataVersion conveys the version of the object, which if empty defaults to "1.0" dockerImageReference string DockerImageReference is the string that can be used to pull this image. dockerImageSignatures array (string) DockerImageSignatures provides the signatures as opaque blobs. This is a part of manifest schema v1. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signatures array Signatures holds all signatures of the image. signatures[] object ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 5.1.53. .status.repository.images[].image.dockerImageLayers Description DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. Type array 5.1.54. .status.repository.images[].image.dockerImageLayers[] Description ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. Type object Required name size mediaType Property Type Description mediaType string MediaType of the referenced object. 
name string Name of the layer as defined by the underlying store. size integer Size of the layer in bytes as defined by the underlying store. 5.1.55. .status.repository.images[].image.dockerImageManifests Description DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. Type array 5.1.56. .status.repository.images[].image.dockerImageManifests[] Description ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. Type object Required digest mediaType manifestSize architecture os Property Type Description architecture string Architecture specifies the supported CPU architecture, for example amd64 or ppc64le . digest string Digest is the unique identifier for the manifest. It refers to an Image object. manifestSize integer ManifestSize represents the size of the raw object contents, in bytes. mediaType string MediaType defines the type of the manifest, possible values are application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json or application/vnd.docker.distribution.manifest.v1+json. os string OS specifies the operating system, for example linux . variant string Variant is an optional field representing a variant of the CPU, for example v6 to specify a particular CPU variant of the ARM CPU. 5.1.57. .status.repository.images[].image.signatures Description Signatures holds all signatures of the image. Type array 5.1.58. .status.repository.images[].image.signatures[] Description ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required type content Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources conditions array Conditions represent the latest available observations of a signature's current state. conditions[] object SignatureCondition describes an image signature condition of particular kind at particular probe time. content string Required: An opaque binary string which is an image's signature. created Time If specified, it is the time of signature's creation. imageIdentity string A human readable string representing image's identity. It could be a product name and version, or an image pull spec (e.g. "registry.access.redhat.com/rhel7/rhel:7.2"). issuedBy object SignatureIssuer holds information about an issuer of signing certificate or key. issuedTo object SignatureSubject holds information about a person or entity who created the signature. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signedClaims object (string) Contains claims from the signature. type string Required: Describes a type of stored blob. 5.1.59. .status.repository.images[].image.signatures[].conditions Description Conditions represent the latest available observations of a signature's current state. Type array 5.1.60. .status.repository.images[].image.signatures[].conditions[] Description SignatureCondition describes an image signature condition of particular kind at particular probe time. Type object Required type status Property Type Description lastProbeTime Time Last time the condition was checked. lastTransitionTime Time Last time the condition transit from one status to another. message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of signature condition, Complete or Failed. 5.1.61. .status.repository.images[].image.signatures[].issuedBy Description SignatureIssuer holds information about an issuer of signing certificate or key. Type object Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. 5.1.62. .status.repository.images[].image.signatures[].issuedTo Description SignatureSubject holds information about a person or entity who created the signature. Type object Required publicKeyID Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. publicKeyID string If present, it is a human readable key id of public key belonging to the subject used to verify image signature. It should contain at least 64 lowest bits of public key's fingerprint (e.g. 0x685ebe62bf278440). 5.1.63. .status.repository.images[].manifests Description Manifests holds sub-manifests metadata when importing a manifest list Type array 5.1.64. .status.repository.images[].manifests[] Description Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources dockerImageConfig string DockerImageConfig is a JSON blob that the runtime uses to set up the container. This is a part of manifest schema v2. 
Will not be set when the image represents a manifest list. dockerImageLayers array DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. dockerImageLayers[] object ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. dockerImageManifest string DockerImageManifest is the raw JSON of the manifest dockerImageManifestMediaType string DockerImageManifestMediaType specifies the mediaType of manifest. This is a part of manifest schema v2. dockerImageManifests array DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. dockerImageManifests[] object ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. dockerImageMetadata RawExtension DockerImageMetadata contains metadata about this image dockerImageMetadataVersion string DockerImageMetadataVersion conveys the version of the object, which if empty defaults to "1.0" dockerImageReference string DockerImageReference is the string that can be used to pull this image. dockerImageSignatures array (string) DockerImageSignatures provides the signatures as opaque blobs. This is a part of manifest schema v1. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signatures array Signatures holds all signatures of the image. signatures[] object ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 5.1.65. .status.repository.images[].manifests[].dockerImageLayers Description DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. Type array 5.1.66. .status.repository.images[].manifests[].dockerImageLayers[] Description ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. Type object Required name size mediaType Property Type Description mediaType string MediaType of the referenced object. name string Name of the layer as defined by the underlying store. size integer Size of the layer in bytes as defined by the underlying store. 5.1.67. .status.repository.images[].manifests[].dockerImageManifests Description DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. Type array 5.1.68. 
.status.repository.images[].manifests[].dockerImageManifests[] Description ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. Type object Required digest mediaType manifestSize architecture os Property Type Description architecture string Architecture specifies the supported CPU architecture, for example amd64 or ppc64le . digest string Digest is the unique identifier for the manifest. It refers to an Image object. manifestSize integer ManifestSize represents the size of the raw object contents, in bytes. mediaType string MediaType defines the type of the manifest, possible values are application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json or application/vnd.docker.distribution.manifest.v1+json. os string OS specifies the operating system, for example linux . variant string Variant is an optional field repreenting a variant of the CPU, for example v6 to specify a particular CPU variant of the ARM CPU. 5.1.69. .status.repository.images[].manifests[].signatures Description Signatures holds all signatures of the image. Type array 5.1.70. .status.repository.images[].manifests[].signatures[] Description ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required type content Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources conditions array Conditions represent the latest available observations of a signature's current state. conditions[] object SignatureCondition describes an image signature condition of particular kind at particular probe time. content string Required: An opaque binary string which is an image's signature. created Time If specified, it is the time of signature's creation. imageIdentity string A human readable string representing image's identity. It could be a product name and version, or an image pull spec (e.g. "registry.access.redhat.com/rhel7/rhel:7.2"). issuedBy object SignatureIssuer holds information about an issuer of signing certificate or key. issuedTo object SignatureSubject holds information about a person or entity who created the signature. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signedClaims object (string) Contains claims from the signature. type string Required: Describes a type of stored blob. 5.1.71. 
.status.repository.images[].manifests[].signatures[].conditions Description Conditions represent the latest available observations of a signature's current state. Type array 5.1.72. .status.repository.images[].manifests[].signatures[].conditions[] Description SignatureCondition describes an image signature condition of particular kind at particular probe time. Type object Required type status Property Type Description lastProbeTime Time Last time the condition was checked. lastTransitionTime Time Last time the condition transit from one status to another. message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of signature condition, Complete or Failed. 5.1.73. .status.repository.images[].manifests[].signatures[].issuedBy Description SignatureIssuer holds information about an issuer of signing certificate or key. Type object Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. 5.1.74. .status.repository.images[].manifests[].signatures[].issuedTo Description SignatureSubject holds information about a person or entity who created the signature. Type object Required publicKeyID Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. publicKeyID string If present, it is a human readable key id of public key belonging to the subject used to verify image signature. It should contain at least 64 lowest bits of public key's fingerprint (e.g. 0x685ebe62bf278440). 5.2. API endpoints The following API endpoints are available: /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreamimports POST : create an ImageStreamImport 5.2.1. /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreamimports Table 5.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create an ImageStreamImport Table 5.2. Body parameters Parameter Type Description body ImageStreamImport schema Table 5.3. 
HTTP responses HTTP code Response body 200 - OK ImageStreamImport schema 201 - Created ImageStreamImport schema 202 - Accepted ImageStreamImport schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/image_apis/imagestreamimport-image-openshift-io-v1 |
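For orientation, a minimal ImageStreamImport object that could be POSTed to the endpoint above might look like the following sketch (illustrative only; the namespace, image stream name, registry host, and tag are placeholder values, not taken from this reference):

apiVersion: image.openshift.io/v1
kind: ImageStreamImport
metadata:
  name: my-imagestream        # placeholder image stream name
  namespace: my-project       # placeholder namespace
spec:
  import: true                # perform the import rather than a dry run
  images:
  - from:
      kind: DockerImage
      name: registry.example.com/myrepo/myimage:latest   # placeholder source image
    to:
      name: latest            # tag to create in the image stream

In everyday use, the oc import-image command constructs and submits an equivalent ImageStreamImport request rather than requiring the object to be written by hand.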
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/openid_connect_oidc_authentication/making-open-source-more-inclusive |
User Guide Volume 1: Teiid Designer | User Guide Volume 1: Teiid Designer Red Hat JBoss Data Virtualization 6.4 This guide is for developers. Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/index |
Chapter 12. TTY Tapset | Chapter 12. TTY Tapset This family of probe points is used to probe TTY (Teletype) activities. It contains the following probe points: | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/tty-dot-stp |
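A minimal usage sketch (not part of this chapter; it assumes the tty.open probe point provided by this tapset and a host with the systemtap package installed):

stap -e 'probe tty.open { printf("%s (pid %d) opened a tty\n", execname(), pid()) }'

This one-liner prints the process name and PID each time a TTY device is opened and runs until interrupted with Ctrl+C.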
22.4. The Log Collector Tool | 22.4. The Log Collector Tool 22.4.1. Log Collector A log collection tool is included in the Red Hat Virtualization Manager. This allows you to easily collect relevant logs from across the Red Hat Virtualization environment when requesting support. The log collection command is ovirt-log-collector . You are required to log in as the root user and provide the administration credentials for the Red Hat Virtualization environment. The ovirt-log-collector -h command displays usage information, including a list of all valid options for the ovirt-log-collector command. 22.4.2. Syntax for the ovirt-log-collector Command The basic syntax for the log collector command is: The two supported modes of operation are list and collect . The list parameter lists either the hosts, clusters, or data centers attached to the Red Hat Virtualization Manager. You are able to filter the log collection based on the listed objects. The collect parameter performs log collection from the Red Hat Virtualization Manager. The collected logs are placed in an archive file under the /tmp/logcollector directory. The ovirt-log-collector command assigns each log a specific file name. Unless another parameter is specified, the default action is to list the available hosts together with the data center and cluster to which they belong. You will be prompted to enter user names and passwords to retrieve certain logs. There are numerous parameters to further refine the ovirt-log-collector command. General options --version Displays the version number of the command in use and returns to prompt. -h , --help Displays command usage information and returns to prompt. --conf-file= PATH Sets PATH as the configuration file the tool is to use. --local-tmp= PATH Sets PATH as the directory in which logs are saved. The default directory is /tmp/logcollector . --ticket-number= TICKET Sets TICKET as the ticket, or case number, to associate with the SOS report. --upload= FTP_SERVER Sets FTP_SERVER as the destination for retrieved logs to be sent using FTP. Do not use this option unless advised to by a Red Hat support representative. --log-file= PATH Sets PATH as the specific file name the command should use for the log output. --quiet Sets quiet mode, reducing console output to a minimum. Quiet mode is off by default. -v , --verbose Sets verbose mode, providing more console output. Verbose mode is off by default. --time-only Displays only information about time differences between hosts, without generating a full SOS report. Red Hat Virtualization Manager Options These options filter the log collection and specify authentication details for the Red Hat Virtualization Manager. These parameters can be combined for specific commands. For example, ovirt-log-collector --user=admin@internal --cluster ClusterA,ClusterB --hosts "SalesHost"* specifies the user as admin@internal and limits the log collection to only SalesHost hosts in clusters A and B . --no-hypervisors Omits virtualization hosts from the log collection. --one-hypervisor-per-cluster Collects the logs of one host (the SPM, if there is one) from each cluster. -u USER , --user= USER Sets the user name for login. The USER is specified in the format user @ domain , where user is the user name and domain is the directory services domain in use. The user must exist in directory services and be known to the Red Hat Virtualization Manager. 
-r FQDN , --rhevm= FQDN Sets the fully qualified domain name of the Red Hat Virtualization Manager from which to collect logs, where FQDN is replaced by the fully qualified domain name of the Manager. It is assumed that the log collector is being run on the same local host as the Red Hat Virtualization Manager; the default value is localhost . -c CLUSTER , --cluster= CLUSTER Collects logs from the virtualization hosts in the nominated CLUSTER in addition to logs from the Red Hat Virtualization Manager. The cluster(s) for inclusion must be specified in a comma-separated list of cluster names or match patterns. -d DATACENTER , --data-center= DATACENTER Collects logs from the virtualization hosts in the nominated DATACENTER in addition to logs from the Red Hat Virtualization Manager. The data center(s) for inclusion must be specified in a comma-separated list of data center names or match patterns. -H HOSTS_LIST , --hosts= HOSTS_LIST Collects logs from the virtualization hosts in the nominated HOSTS_LIST in addition to logs from the Red Hat Virtualization Manager. The hosts for inclusion must be specified in a comma-separated list of host names, fully qualified domain names, or IP addresses. Match patterns are also valid. SSH Configuration --ssh-port= PORT Sets PORT as the port to use for SSH connections with virtualization hosts. -k KEYFILE , --key-file= KEYFILE Sets KEYFILE as the public SSH key to be used for accessing the virtualization hosts. --max-connections= MAX_CONNECTIONS Sets MAX_CONNECTIONS as the maximum concurrent SSH connections for logs from virtualization hosts. The default is 10 . PostgreSQL Database Options The database user name and database name must be specified, using the pg-user and dbname parameters, if they have been changed from the default values. Use the pg-dbhost parameter if the database is not on the local host. Use the optional pg-host-key parameter to collect remote logs. The PostgreSQL SOS plugin must be installed on the database server for remote log collection to be successful. --no-postgresql Disables collection of database. The log collector will connect to the Red Hat Virtualization Manager PostgreSQL database and include the data in the log report unless the --no-postgresql parameter is specified. --pg-user= USER Sets USER as the user name to use for connections with the database server. The default is postgres . --pg-dbname= DBNAME Sets DBNAME as the database name to use for connections with the database server. The default is rhevm . --pg-dbhost= DBHOST Sets DBHOST as the host name for the database server. The default is localhost . --pg-host-key= KEYFILE Sets KEYFILE as the public identity file (private key) for the database server. This value is not set by default; it is required only where the database does not exist on the local host. 22.4.3. Basic Log Collector Usage When the ovirt-log-collector command is run without specifying any additional parameters, its default behavior is to collect all logs from the Red Hat Virtualization Manager and its attached hosts. It will also collect database logs unless the --no-postgresql parameter is added. In the following example, log collector is run to collect all logs from the Red Hat Virtualization Manager and three attached hosts. Example 22.2. Log Collector Usage | [
"ovirt-log-collector options list all|clusters|datacenters ovirt-log-collector options collect",
"ovirt-log-collector INFO: Gathering oVirt Engine information INFO: Gathering PostgreSQL the oVirt Engine database and log files from localhost Please provide REST API password for the admin@internal oVirt Engine user (CTRL+D to abort): About to collect information from 3 hypervisors. Continue? (Y/n): INFO: Gathering information from selected hypervisors INFO: collecting information from 192.168.122.250 INFO: collecting information from 192.168.122.251 INFO: collecting information from 192.168.122.252 INFO: finished collecting information from 192.168.122.250 INFO: finished collecting information from 192.168.122.251 INFO: finished collecting information from 192.168.122.252 Creating compressed archive INFO Log files have been collected and placed in /tmp/logcollector/sosreport-rhn-account-20110804121320-ce2a.tar.xz. The MD5 for this file is 6d741b78925998caff29020df2b2ce2a and its size is 26.7M"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-the_log_collector_tool |
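Combining the options documented above, a collection limited to a single cluster that also skips the PostgreSQL database dump could be invoked as follows (illustrative; ClusterA is a placeholder cluster name):

ovirt-log-collector --user=admin@internal --cluster ClusterA --no-postgresql collect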
Chapter 1. Introducing .NET 8.0 | Chapter 1. Introducing .NET 8.0 .NET is a general-purpose development platform featuring automatic memory management and modern programming languages. Using .NET, you can build high-quality applications efficiently. .NET is available on Red Hat Enterprise Linux (RHEL) and OpenShift Container Platform through certified containers. .NET offers the following features: The ability to follow a microservices-based approach, where some components are built with .NET and others with Java, but all can run on a common, supported platform on RHEL and OpenShift Container Platform. The capacity to more easily develop new .NET workloads on Microsoft Windows. You can deploy and run your applications on either RHEL or Windows Server. A heterogeneous data center, where the underlying infrastructure is capable of running .NET applications without having to rely solely on Windows Server. .NET 8.0 is supported on RHEL 8.9 and later, RHEL 9.3 and later, and supported OpenShift Container Platform versions. | null | https://docs.redhat.com/en/documentation/net/8.0/html/getting_started_with_.net_on_rhel_9/introducing-dotnet_getting-started-with-dotnet-on-rhel-9 |
Networking | Networking OpenShift Container Platform 4.18 Configuring and managing cluster networking Red Hat OpenShift Documentation Team | [
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-some-pods namespace: default spec: podSelector: matchLabels: role: app ingress: - from: - podSelector: matchLabels: role: backend ports: - protocol: TCP port: 80",
"apiVersion: v1 kind: Service metadata: name: my-service spec: selector: app: myapp ports: - protocol: TCP port: 80 targetPort: 8080",
"apiVersion: v1 kind: Service metadata: name: backend spec: selector: app: backend ports: - protocol: TCP port: 80 targetPort: 8080",
"apiVersion: v1 kind: Pod metadata: name: backend-pod labels: app: backend spec: containers: - name: backend-container image: my-backend-image ports: - containerPort: 8080",
"apiVersion: v1 kind: Network metadata: name: cluster platform: aws: 1 lbType: classic 2",
"apiVersion: v1 kind: Network metadata: name: cluster endpointPublishingStrategy: loadBalancer: 1 dnsManagementPolicy: Managed providerParameters: aws: classicLoadBalancer: 2 connectionIdleTimeout: 0s type: Classic type: AWS scope: External type: LoadBalancerService",
"apiVersion: apps/v1 kind: Deployment metadata: name: backend-deployment labels: app: backend spec: replicas: 3 selector: matchLabels: app: backend template: metadata: labels: app: backend spec: containers: - name: backend-container image: your-backend-image ports: - containerPort: 8080",
"apiVersion: v1 kind: Service metadata: name: backend-service spec: selector: app: backend ports: - protocol: TCP port: 80 targetPort: 8080",
"apiVersion: apps/v1 kind: Deployment metadata: name: frontend-deployment labels: app: frontend spec: replicas: 3 selector: matchLabels: app: frontend template: metadata: labels: app: frontend spec: containers: - name: frontend-container image: your-frontend-image ports: - containerPort: 80",
"oc apply -f frontend-deployment.yaml",
"fetch('http://backend-service.default.svc.cluster.local/api/data') .then(response => response.json()) .then(data => console.log(data));",
"oc new-project webapp-project",
"oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git --name=webapp",
"oc expose svc/webapp --hostname=webapp.example.com",
"oc create secret tls webapp-tls --cert=path/to/tls.crt --key=path/to/tls.key",
"oc patch route/webapp -p '{\"spec\":{\"tls\":{\"termination\":\"edge\",\"certificate\":\"path/to/tls.crt\",\"key\":\"path/to/tls.key\"}}}'",
"apiVersion: v1 kind: Service metadata: name: webapp-service namespace: webapp-project spec: selector: app: webapp ports: - protocol: TCP port: 80 targetPort: 8080",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: webapp-ingress namespace: webapp-project annotations: kubernetes.io/ingress.class: \"nginx\" spec: rules: - host: webapp.example.com http: paths: - path: / pathType: Prefix backend: service: name: webapp-service port: number: 80",
"oc create secret tls webapp-tls --cert=path/to/tls.crt --key=path/to/tls.key -n webapp-project",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: webapp-ingress namespace: webapp-project spec: tls: 1 - hosts: - webapp.example.com secretName: webapp-tls 2 rules: - host: webapp.example.com http: paths: - path: / pathType: Prefix backend: service: name: webapp-service port: number: 80",
"apiVersion: v1 kind: Service metadata: name: my-web-app spec: type: LoadBalancer selector: app: my-web-app ports: - port: 80 targetPort: 8080",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: my-web-app-ingress annotations: kubernetes.io/ingress.class: \"nginx\" spec: rules: - host: mywebapp.example.com http: paths: - path: / pathType: Prefix backend: service: name: my-web-app port: number: 80",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: my-web-app-ingress annotations: kubernetes.io/ingress.class: \"nginx\" spec: tls: - hosts: - mywebapp.example.com secretName: my-tls-secret rules: - host: mywebapp.example.com http: paths: - path: / pathType: Prefix backend: service: name: my-web-app port: number: 80",
"ssh -i <ssh-key-path> core@<master-hostname>",
"cat << EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: name: openshift-nmstate spec: finalizers: - kubernetes EOF",
"cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-nmstate namespace: openshift-nmstate spec: targetNamespaces: - openshift-nmstate EOF",
"cat << EOF| oc apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kubernetes-nmstate-operator namespace: openshift-nmstate spec: channel: stable installPlanApproval: Automatic name: kubernetes-nmstate-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF",
"oc get clusterserviceversion -n openshift-nmstate -o custom-columns=Name:.metadata.name,Phase:.status.phase",
"Name Phase kubernetes-nmstate-operator.4.18.0-202210210157 Succeeded",
"cat << EOF | oc apply -f - apiVersion: nmstate.io/v1 kind: NMState metadata: name: nmstate EOF",
"oc get pod -n openshift-nmstate",
"Name Ready Status Restarts Age pod/nmstate-handler-wn55p 1/1 Running 0 77s pod/nmstate-operator-f6bb869b6-v5m92 1/1 Running 0 4m51s",
"interfaces: - name: br1 type: linux-bridge state: up ipv4: enabled: true dhcp: true dhcp-custom-hostname: foo bridge: options: stp: enabled: false port: []",
"controller_runtime_reconcile_time_seconds_bucket{controller=\"nodenetworkconfigurationenactment\",le=\"0.005\"} 16 controller_runtime_reconcile_time_seconds_bucket{controller=\"nodenetworkconfigurationenactment\",le=\"0.01\"} 16 controller_runtime_reconcile_time_seconds_bucket{controller=\"nodenetworkconfigurationenactment\",le=\"0.025\"} 16 HELP kubernetes_nmstate_features_applied Number of nmstate features applied labeled by its name TYPE kubernetes_nmstate_features_applied gauge kubernetes_nmstate_features_applied{name=\"dhcpv4-custom-hostname\"} 1",
"oc delete --namespace openshift-nmstate subscription kubernetes-nmstate-operator",
"oc get --namespace openshift-nmstate clusterserviceversion",
"NAME DISPLAY VERSION REPLACES PHASE kubernetes-nmstate-operator.v4.18.0 Kubernetes NMState Operator 4.18.0 Succeeded",
"oc delete --namespace openshift-nmstate clusterserviceversion kubernetes-nmstate-operator.v4.18.0",
"oc -n openshift-nmstate delete nmstate nmstate",
"oc delete --all deployments --namespace=openshift-nmstate",
"INDEX=USD(oc get console.operator.openshift.io cluster -o json | jq -r '.spec.plugins | to_entries[] | select(.value == \"nmstate-console-plugin\") | .key')",
"oc patch console.operator.openshift.io cluster --type=json -p \"[{\\\"op\\\": \\\"remove\\\", \\\"path\\\": \\\"/spec/plugins/USDINDEX\\\"}]\" 1",
"oc delete crd nmstates.nmstate.io",
"oc delete crd nodenetworkconfigurationenactments.nmstate.io",
"oc delete crd nodenetworkstates.nmstate.io",
"oc delete crd nodenetworkconfigurationpolicies.nmstate.io",
"oc delete namespace kubernetes-nmstate",
"remote error: tls: protocol version not supported",
"oc -n aws-load-balancer-operator get sub aws-load-balancer-operator --template='{{.status.installplan.name}}{{\"\\n\"}}'",
"install-zlfbt",
"oc -n aws-load-balancer-operator get ip <install_plan_name> --template='{{.status.phase}}{{\"\\n\"}}'",
"Complete",
"oc get -n aws-load-balancer-operator deployment/aws-load-balancer-operator-controller-manager",
"NAME READY UP-TO-DATE AVAILABLE AGE aws-load-balancer-operator-controller-manager 1/1 1 1 23h",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: <application_name> annotations: alb.ingress.kubernetes.io/subnets: <subnet_id> 1 spec: ingressClassName: alb rules: - http: paths: - path: / pathType: Exact backend: service: name: <application_name> port: number: 80",
"oc get infrastructure cluster -o=jsonpath=\"{.status.infrastructureName}\"",
"oc get authentication.config cluster -o=jsonpath=\"{.spec.serviceAccountIssuer}\" 1",
"curl --create-dirs -o <credentials_requests_dir>/operator.yaml https://raw.githubusercontent.com/openshift/aws-load-balancer-operator/main/hack/operator-credentials-request.yaml",
"ccoctl aws create-iam-roles --name <name> --region=<aws_region> --credentials-requests-dir=<credentials_requests_dir> --identity-provider-arn <oidc_arn>",
"2023/09/12 11:38:57 Role arn:aws:iam::777777777777:role/<name>-aws-load-balancer-operator-aws-load-balancer-operator created 1 2023/09/12 11:38:57 Saved credentials configuration to: /home/user/<credentials_requests_dir>/manifests/aws-load-balancer-operator-aws-load-balancer-operator-credentials.yaml 2023/09/12 11:38:58 Updated Role policy for Role <name>-aws-load-balancer-operator-aws-load-balancer-operator created",
"cat <<EOF > albo-operator-trust-policy.json { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"<oidc_arn>\" 1 }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"<cluster_oidc_endpoint>:sub\": \"system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-operator-controller-manager\" 2 } } } ] } EOF",
"aws iam create-role --role-name albo-operator --assume-role-policy-document file://albo-operator-trust-policy.json",
"ROLE arn:aws:iam::<aws_account_number>:role/albo-operator 2023-08-02T12:13:22Z 1 ASSUMEROLEPOLICYDOCUMENT 2012-10-17 STATEMENT sts:AssumeRoleWithWebIdentity Allow STRINGEQUALS system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-controller-manager PRINCIPAL arn:aws:iam:<aws_account_number>:oidc-provider/<cluster_oidc_endpoint>",
"curl -o albo-operator-permission-policy.json https://raw.githubusercontent.com/openshift/aws-load-balancer-operator/main/hack/operator-permission-policy.json",
"aws iam put-role-policy --role-name albo-operator --policy-name perms-policy-albo-operator --policy-document file://albo-operator-permission-policy.json",
"oc new-project aws-load-balancer-operator",
"cat <<EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: aws-load-balancer-operator namespace: aws-load-balancer-operator spec: targetNamespaces: [] EOF",
"cat <<EOF | oc apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: aws-load-balancer-operator namespace: aws-load-balancer-operator spec: channel: stable-v1 name: aws-load-balancer-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: ROLEARN value: \"<albo_role_arn>\" 1 EOF",
"curl --create-dirs -o <credentials_requests_dir>/controller.yaml https://raw.githubusercontent.com/openshift/aws-load-balancer-operator/main/hack/controller/controller-credentials-request.yaml",
"ccoctl aws create-iam-roles --name <name> --region=<aws_region> --credentials-requests-dir=<credentials_requests_dir> --identity-provider-arn <oidc_arn>",
"2023/09/12 11:38:57 Role arn:aws:iam::777777777777:role/<name>-aws-load-balancer-operator-aws-load-balancer-controller created 1 2023/09/12 11:38:57 Saved credentials configuration to: /home/user/<credentials_requests_dir>/manifests/aws-load-balancer-operator-aws-load-balancer-controller-credentials.yaml 2023/09/12 11:38:58 Updated Role policy for Role <name>-aws-load-balancer-operator-aws-load-balancer-controller created",
"cat <<EOF > albo-controller-trust-policy.json { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"<oidc_arn>\" 1 }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"<cluster_oidc_endpoint>:sub\": \"system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-operator-controller-manager\" 2 } } } ] } EOF",
"aws iam create-role --role-name albo-controller --assume-role-policy-document file://albo-controller-trust-policy.json",
"ROLE arn:aws:iam::<aws_account_number>:role/albo-controller 2023-08-02T12:13:22Z 1 ASSUMEROLEPOLICYDOCUMENT 2012-10-17 STATEMENT sts:AssumeRoleWithWebIdentity Allow STRINGEQUALS system:serviceaccount:aws-load-balancer-operator:aws-load-balancer-operator-controller-manager PRINCIPAL arn:aws:iam:<aws_account_number>:oidc-provider/<cluster_oidc_endpoint>",
"curl -o albo-controller-permission-policy.json https://raw.githubusercontent.com/openshift/aws-load-balancer-operator/main/assets/iam-policy.json",
"aws iam put-role-policy --role-name albo-controller --policy-name perms-policy-albo-controller --policy-document file://albo-controller-permission-policy.json",
"apiVersion: networking.olm.openshift.io/v1 kind: AWSLoadBalancerController 1 metadata: name: cluster 2 spec: credentialsRequestConfig: stsIAMRoleARN: <albc_role_arn> 3",
"apiVersion: v1 kind: Namespace metadata: name: aws-load-balancer-operator",
"oc apply -f namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: aws-lb-operatorgroup namespace: aws-load-balancer-operator spec: upgradeStrategy: Default",
"oc apply -f operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: aws-load-balancer-operator namespace: aws-load-balancer-operator spec: channel: stable-v1 installPlanApproval: Automatic name: aws-load-balancer-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"oc apply -f subscription.yaml",
"oc -n aws-load-balancer-operator get subscription aws-load-balancer-operator --template='{{.status.installplan.name}}{{\"\\n\"}}'",
"oc -n aws-load-balancer-operator get ip <install_plan_name> --template='{{.status.phase}}{{\"\\n\"}}'",
"apiVersion: networking.olm.openshift.io/v1 kind: AWSLoadBalancerController 1 metadata: name: cluster 2 spec: subnetTagging: Auto 3 additionalResourceTags: 4 - key: example.org/security-scope value: staging ingressClass: alb 5 config: replicas: 2 6 enabledAddons: 7 - AWSWAFv2 8",
"oc create -f sample-aws-lb.yaml",
"apiVersion: apps/v1 kind: Deployment 1 metadata: name: <echoserver> 2 namespace: echoserver spec: selector: matchLabels: app: echoserver replicas: 3 3 template: metadata: labels: app: echoserver spec: containers: - image: openshift/origin-node command: - \"/bin/socat\" args: - TCP4-LISTEN:8080,reuseaddr,fork - EXEC:'/bin/bash -c \\\"printf \\\\\\\"HTTP/1.0 200 OK\\r\\n\\r\\n\\\\\\\"; sed -e \\\\\\\"/^\\r/q\\\\\\\"\\\"' imagePullPolicy: Always name: echoserver ports: - containerPort: 8080",
"apiVersion: v1 kind: Service 1 metadata: name: <echoserver> 2 namespace: echoserver spec: ports: - port: 80 targetPort: 8080 protocol: TCP type: NodePort selector: app: echoserver",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: <name> 1 namespace: echoserver annotations: alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/target-type: instance spec: ingressClassName: alb rules: - http: paths: - path: / pathType: Exact backend: service: name: <echoserver> 2 port: number: 80",
"HOST=USD(oc get ingress -n echoserver echoserver --template='{{(index .status.loadBalancer.ingress 0).hostname}}')",
"curl USDHOST",
"oc -n aws-load-balancer-operator create configmap trusted-ca",
"oc -n aws-load-balancer-operator label cm trusted-ca config.openshift.io/inject-trusted-cabundle=true",
"oc -n aws-load-balancer-operator patch subscription aws-load-balancer-operator --type='merge' -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"TRUSTED_CA_CONFIGMAP_NAME\",\"value\":\"trusted-ca\"}],\"volumes\":[{\"name\":\"trusted-ca\",\"configMap\":{\"name\":\"trusted-ca\"}}],\"volumeMounts\":[{\"name\":\"trusted-ca\",\"mountPath\":\"/etc/pki/tls/certs/albo-tls-ca-bundle.crt\",\"subPath\":\"ca-bundle.crt\"}]}}}'",
"oc -n aws-load-balancer-operator exec deploy/aws-load-balancer-operator-controller-manager -c manager -- bash -c \"ls -l /etc/pki/tls/certs/albo-tls-ca-bundle.crt; printenv TRUSTED_CA_CONFIGMAP_NAME\"",
"-rw-r--r--. 1 root 1000690000 5875 Jan 11 12:25 /etc/pki/tls/certs/albo-tls-ca-bundle.crt trusted-ca",
"oc -n aws-load-balancer-operator rollout restart deployment/aws-load-balancer-operator-controller-manager",
"apiVersion: networking.olm.openshift.io/v1 kind: AWSLoadBalancerController metadata: name: cluster spec: subnetTagging: Auto ingressClass: tls-termination 1",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: <example> 1 annotations: alb.ingress.kubernetes.io/scheme: internet-facing 2 alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:xxxxx 3 spec: ingressClassName: tls-termination 4 rules: - host: example.com 5 http: paths: - path: / pathType: Exact backend: service: name: <example_service> 6 port: number: 80",
"apiVersion: elbv2.k8s.aws/v1beta1 1 kind: IngressClassParams metadata: name: single-lb-params 2 spec: group: name: single-lb 3",
"oc create -f sample-single-lb-params.yaml",
"apiVersion: networking.k8s.io/v1 1 kind: IngressClass metadata: name: single-lb 2 spec: controller: ingress.k8s.aws/alb 3 parameters: apiGroup: elbv2.k8s.aws 4 kind: IngressClassParams 5 name: single-lb-params 6",
"oc create -f sample-single-lb-class.yaml",
"apiVersion: networking.olm.openshift.io/v1 kind: AWSLoadBalancerController metadata: name: cluster spec: subnetTagging: Auto ingressClass: single-lb 1",
"oc create -f sample-single-lb.yaml",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example-1 1 annotations: alb.ingress.kubernetes.io/scheme: internet-facing 2 alb.ingress.kubernetes.io/group.order: \"1\" 3 alb.ingress.kubernetes.io/target-type: instance 4 spec: ingressClassName: single-lb 5 rules: - host: example.com 6 http: paths: - path: /blog 7 pathType: Prefix backend: service: name: example-1 8 port: number: 80 9 --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example-2 annotations: alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/group.order: \"2\" alb.ingress.kubernetes.io/target-type: instance spec: ingressClassName: single-lb rules: - host: example.com http: paths: - path: /store pathType: Prefix backend: service: name: example-2 port: number: 80 --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example-3 annotations: alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/group.order: \"3\" alb.ingress.kubernetes.io/target-type: instance spec: ingressClassName: single-lb rules: - host: example.com http: paths: - path: / pathType: Prefix backend: service: name: example-3 port: number: 80",
"oc create -f sample-multiple-ingress.yaml",
"oc logs -n aws-load-balancer-operator deployment/aws-load-balancer-operator-controller-manager -c manager",
"cat << EOF| oc create -f - apiVersion: v1 kind: Namespace metadata: labels: pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/enforce-version: v1.24 name: bpfman EOF",
"cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: bpfman-operators namespace: bpfman EOF",
"cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: bpfman-operator namespace: bpfman spec: name: bpfman-operator channel: alpha source: community-operators sourceNamespace: openshift-marketplace EOF",
"oc get ip -n bpfman",
"NAME CSV APPROVAL APPROVED install-ppjxl security-profiles-operator.v0.8.5 Automatic true",
"oc get csv -n bpfman",
"NAME DISPLAY VERSION REPLACES PHASE bpfman-operator.v0.5.0 eBPF Manager Operator 0.5.0 bpfman-operator.v0.4.2 Succeeded",
"curl -L https://github.com/bpfman/bpfman/releases/download/v0.5.1/go-xdp-counter-install-selinux.yaml -o go-xdp-counter-install-selinux.yaml",
"oc create -f go-xdp-counter-install-selinux.yaml",
"namespace/go-xdp-counter created serviceaccount/bpfman-app-go-xdp-counter created clusterrolebinding.rbac.authorization.k8s.io/xdp-binding created daemonset.apps/go-xdp-counter-ds created xdpprogram.bpfman.io/go-xdp-counter-example created selinuxprofile.security-profiles-operator.x-k8s.io/bpfman-secure created",
"oc get all -o wide -n go-xdp-counter",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod/go-xdp-counter-ds-4m9cw 1/1 Running 0 44s 10.129.0.92 ci-ln-dcbq7d2-72292-ztrkp-master-1 <none> <none> pod/go-xdp-counter-ds-7hzww 1/1 Running 0 44s 10.130.0.86 ci-ln-dcbq7d2-72292-ztrkp-master-2 <none> <none> pod/go-xdp-counter-ds-qm9zx 1/1 Running 0 44s 10.128.0.101 ci-ln-dcbq7d2-72292-ztrkp-master-0 <none> <none> NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR daemonset.apps/go-xdp-counter-ds 3 3 3 3 3 <none> 44s go-xdp-counter quay.io/bpfman-userspace/go-xdp-counter:v0.5.0 name=go-xdp-counter",
"oc get xdpprogram go-xdp-counter-example",
"NAME BPFFUNCTIONNAME NODESELECTOR STATUS go-xdp-counter-example xdp_stats {} ReconcileSuccess",
"oc logs <pod_name> -n go-xdp-counter",
"2024/08/13 15:20:06 15016 packets received 2024/08/13 15:20:06 93581579 bytes received 2024/08/13 15:20:09 19284 packets received 2024/08/13 15:20:09 99638680 bytes received 2024/08/13 15:20:12 23522 packets received 2024/08/13 15:20:12 105666062 bytes received 2024/08/13 15:20:15 27276 packets received 2024/08/13 15:20:15 112028608 bytes received 2024/08/13 15:20:18 29470 packets received 2024/08/13 15:20:18 112732299 bytes received 2024/08/13 15:20:21 32588 packets received 2024/08/13 15:20:21 113813781 bytes received",
"oc -n external-dns-operator get sub external-dns-operator -o yaml | yq '.status.installplan.name'",
"install-zcvlr",
"oc -n external-dns-operator get ip <install_plan_name> -o yaml | yq '.status.phase'",
"Complete",
"oc get -n external-dns-operator deployment/external-dns-operator",
"NAME READY UP-TO-DATE AVAILABLE AGE external-dns-operator 1/1 1 1 23h",
"oc logs -n external-dns-operator deployment/external-dns-operator -c external-dns-operator",
"time=\"2022-09-02T08:53:57Z\" level=error msg=\"Failure in zone test.example.io. [Id: /hostedzone/Z06988883Q0H0RL6UMXXX]\" time=\"2022-09-02T08:53:57Z\" level=error msg=\"InvalidChangeBatch: [FATAL problem: DomainLabelTooLong (Domain label is too long) encountered with 'external-dns-a-hello-openshift-aaaaaaaaaa-bbbbbbbbbb-ccccccc']\\n\\tstatus code: 400, request id: e54dfd5a-06c6-47b0-bcb9-a4f7c3a4e0c6\"",
"apiVersion: v1 kind: Namespace metadata: name: external-dns-operator",
"oc apply -f namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: external-dns-operator namespace: external-dns-operator spec: upgradeStrategy: Default targetNamespaces: - external-dns-operator",
"oc apply -f operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: external-dns-operator namespace: external-dns-operator spec: channel: stable-v1 installPlanApproval: Automatic name: external-dns-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"oc apply -f subscription.yaml",
"oc -n external-dns-operator get subscription external-dns-operator --template='{{.status.installplan.name}}{{\"\\n\"}}'",
"oc -n external-dns-operator get ip <install_plan_name> --template='{{.status.phase}}{{\"\\n\"}}'",
"oc -n external-dns-operator get pod",
"NAME READY STATUS RESTARTS AGE external-dns-operator-5584585fd7-5lwqm 2/2 Running 0 11m",
"oc -n external-dns-operator get subscription",
"NAME PACKAGE SOURCE CHANNEL external-dns-operator external-dns-operator redhat-operators stable-v1",
"oc -n external-dns-operator get csv",
"NAME DISPLAY VERSION REPLACES PHASE external-dns-operator.v<1.y.z> ExternalDNS Operator <1.y.z> Succeeded",
"spec: provider: type: AWS 1 aws: credentials: name: aws-access-key 2",
"zones: - \"myzoneid\" 1",
"domains: - filterType: Include 1 matchType: Exact 2 name: \"myzonedomain1.com\" 3 - filterType: Include matchType: Pattern 4 pattern: \".*\\\\.otherzonedomain\\\\.com\" 5",
"source: 1 type: Service 2 service: serviceType: 3 - LoadBalancer - ClusterIP labelFilter: 4 matchLabels: external-dns.mydomain.org/publish: \"yes\" hostnameAnnotation: \"Allow\" 5 fqdnTemplate: - \"{{.Name}}.myzonedomain.com\" 6",
"source: type: OpenShiftRoute 1 openshiftRouteOptions: routerName: default 2 labelFilter: matchLabels: external-dns.mydomain.org/publish: \"yes\"",
"oc whoami",
"system:admin",
"export AWS_ACCESS_KEY_ID=USD(oc get secrets aws-creds -n kube-system --template={{.data.aws_access_key_id}} | base64 -d) export AWS_SECRET_ACCESS_KEY=USD(oc get secrets aws-creds -n kube-system --template={{.data.aws_secret_access_key}} | base64 -d)",
"oc get routes --all-namespaces | grep console",
"openshift-console console console-openshift-console.apps.testextdnsoperator.apacshift.support console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.testextdnsoperator.apacshift.support downloads http edge/Redirect None",
"aws route53 list-hosted-zones | grep testextdnsoperator.apacshift.support",
"HOSTEDZONES terraform /hostedzone/Z02355203TNN1XXXX1J6O testextdnsoperator.apacshift.support. 5",
"cat <<EOF | oc create -f - apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-aws 1 spec: domains: - filterType: Include 2 matchType: Exact 3 name: testextdnsoperator.apacshift.support 4 provider: type: AWS 5 source: 6 type: OpenShiftRoute 7 openshiftRouteOptions: routerName: default 8 EOF",
"aws route53 list-resource-record-sets --hosted-zone-id Z02355203TNN1XXXX1J6O --query \"ResourceRecordSets[?Type == 'CNAME']\" | grep console",
"aws --profile account-a iam get-role --role-name user-rol1 | head -1",
"ROLE arn:aws:iam::1234567890123:role/user-rol1 2023-09-14T17:21:54+00:00 3600 / AROA3SGB2ZRKRT5NISNJN user-rol1",
"aws --profile account-a route53 list-hosted-zones | grep testextdnsoperator.apacshift.support",
"HOSTEDZONES terraform /hostedzone/Z02355203TNN1XXXX1J6O testextdnsoperator.apacshift.support. 5",
"cat <<EOF | oc create -f - apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-aws spec: domains: - filterType: Include matchType: Exact name: testextdnsoperator.apacshift.support provider: type: AWS aws: assumeRole: arn: arn:aws:iam::12345678901234:role/user-rol1 1 source: type: OpenShiftRoute openshiftRouteOptions: routerName: default EOF",
"aws --profile account-a route53 list-resource-record-sets --hosted-zone-id Z02355203TNN1XXXX1J6O --query \"ResourceRecordSets[?Type == 'CNAME']\" | grep console-openshift-console",
"CLIENT_ID=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_client_id}} | base64 -d) CLIENT_SECRET=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_client_secret}} | base64 -d) RESOURCE_GROUP=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_resourcegroup}} | base64 -d) SUBSCRIPTION_ID=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_subscription_id}} | base64 -d) TENANT_ID=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_tenant_id}} | base64 -d)",
"az login --service-principal -u \"USD{CLIENT_ID}\" -p \"USD{CLIENT_SECRET}\" --tenant \"USD{TENANT_ID}\"",
"oc get routes --all-namespaces | grep console",
"openshift-console console console-openshift-console.apps.test.azure.example.com console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.test.azure.example.com downloads http edge/Redirect None",
"az network dns zone list --resource-group \"USD{RESOURCE_GROUP}\"",
"apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-azure 1 spec: zones: - \"/subscriptions/1234567890/resourceGroups/test-azure-xxxxx-rg/providers/Microsoft.Network/dnszones/test.azure.example.com\" 2 provider: type: Azure 3 source: openshiftRouteOptions: 4 routerName: default 5 type: OpenShiftRoute 6",
"az network dns record-set list -g \"USD{RESOURCE_GROUP}\" -z test.azure.example.com | grep console",
"oc get secret gcp-credentials -n kube-system --template='{{USDv := index .data \"service_account.json\"}}{{USDv}}' | base64 -d - > decoded-gcloud.json",
"export GOOGLE_CREDENTIALS=decoded-gcloud.json",
"gcloud auth activate-service-account <client_email as per decoded-gcloud.json> --key-file=decoded-gcloud.json",
"gcloud config set project <project_id as per decoded-gcloud.json>",
"oc get routes --all-namespaces | grep console",
"openshift-console console console-openshift-console.apps.test.gcp.example.com console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.test.gcp.example.com downloads http edge/Redirect None",
"gcloud dns managed-zones list | grep test.gcp.example.com",
"qe-cvs4g-private-zone test.gcp.example.com",
"apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-gcp 1 spec: domains: - filterType: Include 2 matchType: Exact 3 name: test.gcp.example.com 4 provider: type: GCP 5 source: openshiftRouteOptions: 6 routerName: default 7 type: OpenShiftRoute 8",
"gcloud dns record-sets list --zone=qe-cvs4g-private-zone | grep console",
"oc -n external-dns-operator create secret generic infoblox-credentials --from-literal=EXTERNAL_DNS_INFOBLOX_WAPI_USERNAME=<infoblox_username> --from-literal=EXTERNAL_DNS_INFOBLOX_WAPI_PASSWORD=<infoblox_password>",
"oc get routes --all-namespaces | grep console",
"openshift-console console console-openshift-console.apps.test.example.com console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.test.example.com downloads http edge/Redirect None",
"apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-infoblox 1 spec: provider: type: Infoblox 2 infoblox: credentials: name: infoblox-credentials gridHost: USD{INFOBLOX_GRID_PUBLIC_IP} wapiPort: 443 wapiVersion: \"2.3.1\" domains: - filterType: Include matchType: Exact name: test.example.com source: type: OpenShiftRoute 3 openshiftRouteOptions: routerName: default 4",
"oc create -f external-dns-sample-infoblox.yaml",
"oc -n external-dns-operator create configmap trusted-ca",
"oc -n external-dns-operator label cm trusted-ca config.openshift.io/inject-trusted-cabundle=true",
"oc -n external-dns-operator patch subscription external-dns-operator --type='json' -p='[{\"op\": \"add\", \"path\": \"/spec/config\", \"value\":{\"env\":[{\"name\":\"TRUSTED_CA_CONFIGMAP_NAME\",\"value\":\"trusted-ca\"}]}}]'",
"oc -n external-dns-operator exec deploy/external-dns-operator -c external-dns-operator -- printenv TRUSTED_CA_CONFIGMAP_NAME",
"trusted-ca",
"\"event\":\"ipAllocated\",\"ip\":\"172.22.0.201\",\"msg\":\"IP address assigned by controller",
"cat << EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: name: metallb-system EOF",
"cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: metallb-operator namespace: metallb-system EOF",
"oc get operatorgroup -n metallb-system",
"NAME AGE metallb-operator 14m",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: metallb-operator-sub namespace: metallb-system spec: channel: stable name: metallb-operator source: redhat-operators 1 sourceNamespace: openshift-marketplace",
"oc create -f metallb-sub.yaml",
"oc label ns metallb-system \"openshift.io/cluster-monitoring=true\"",
"oc get installplan -n metallb-system",
"NAME CSV APPROVAL APPROVED install-wzg94 metallb-operator.4.18.0-nnnnnnnnnnnn Automatic true",
"oc get clusterserviceversion -n metallb-system -o custom-columns=Name:.metadata.name,Phase:.status.phase",
"Name Phase metallb-operator.4.18.0-nnnnnnnnnnnn Succeeded",
"cat << EOF | oc apply -f - apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system EOF",
"oc get deployment -n metallb-system controller",
"NAME READY UP-TO-DATE AVAILABLE AGE controller 1/1 1 1 11m",
"oc get daemonset -n metallb-system speaker",
"NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE speaker 6 6 6 6 6 kubernetes.io/os=linux 18m",
"apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: nodeSelector: 1 node-role.kubernetes.io/worker: \"\" speakerTolerations: 2 - key: \"Example\" operator: \"Exists\" effect: \"NoExecute\"",
"apiVersion: scheduling.k8s.io/v1 kind: PriorityClass metadata: name: high-priority value: 1000000",
"oc apply -f myPriorityClass.yaml",
"apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: logLevel: debug controllerConfig: priorityClassName: high-priority 1 affinity: podAffinity: 2 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app: metallb topologyKey: kubernetes.io/hostname speakerConfig: priorityClassName: high-priority affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app: metallb topologyKey: kubernetes.io/hostname",
"oc apply -f MetalLBPodConfig.yaml",
"oc get pods -n metallb-system -o custom-columns=NAME:.metadata.name,PRIORITY:.spec.priorityClassName",
"NAME PRIORITY controller-584f5c8cd8-5zbvg high-priority metallb-operator-controller-manager-9c8d9985-szkqg <none> metallb-operator-webhook-server-c895594d4-shjgx <none> speaker-dddf7 high-priority",
"oc get pod -o=custom-columns=NODE:.spec.nodeName,NAME:.metadata.name -n metallb-system",
"apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: logLevel: debug controllerConfig: resources: limits: cpu: \"200m\" speakerConfig: resources: limits: cpu: \"300m\"",
"oc apply -f CPULimits.yaml",
"oc describe pod <pod_name>",
"oc get subscription metallb-operator -n metallb-system -o yaml | grep currentCSV",
"currentCSV: metallb-operator.4.10.0-202207051316",
"oc delete subscription metallb-operator -n metallb-system",
"subscription.operators.coreos.com \"metallb-operator\" deleted",
"oc delete clusterserviceversion metallb-operator.4.10.0-202207051316 -n metallb-system",
"clusterserviceversion.operators.coreos.com \"metallb-operator.4.10.0-202207051316\" deleted",
"oc get operatorgroup -n metallb-system",
"NAME AGE metallb-system-7jc66 85m",
"oc get operatorgroup metallb-system-7jc66 -n metallb-system -o yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: \"\" creationTimestamp: \"2023-10-25T09:42:49Z\" generateName: metallb-system- generation: 1 name: metallb-system-7jc66 namespace: metallb-system resourceVersion: \"25027\" uid: f5f644a0-eef8-4e31-a306-e2bbcfaffab3 spec: targetNamespaces: - metallb-system upgradeStrategy: Default status: lastUpdated: \"2023-10-25T09:42:49Z\" namespaces: - metallb-system",
"oc edit n metallb-system",
"operatorgroup.operators.coreos.com/metallb-system-7jc66 edited",
"oc get operatorgroup metallb-system-7jc66 -n metallb-system -o yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: \"\" creationTimestamp: \"2023-10-25T09:42:49Z\" generateName: metallb-system- generation: 2 name: metallb-system-7jc66 namespace: metallb-system resourceVersion: \"61658\" uid: f5f644a0-eef8-4e31-a306-e2bbcfaffab3 spec: upgradeStrategy: Default status: lastUpdated: \"2023-10-25T14:31:30Z\" namespaces: - \"\"",
"oc get namespaces | grep metallb-system",
"metallb-system Active 31m",
"oc get metallb -n metallb-system",
"NAME AGE metallb 33m",
"oc get csv -n metallb-system",
"NAME DISPLAY VERSION REPLACES PHASE metallb-operator.4.18.0-202207051316 MetalLB Operator 4.18.0-202207051316 Succeeded",
"oc get -n openshift-network-operator deployment/network-operator",
"NAME READY UP-TO-DATE AVAILABLE AGE network-operator 1/1 1 1 56m",
"oc get clusteroperator/network",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE network 4.16.1 True False False 50m",
"oc describe network.config/cluster",
"Name: cluster Namespace: Labels: <none> Annotations: <none> API Version: config.openshift.io/v1 Kind: Network Metadata: Creation Timestamp: 2024-08-08T11:25:56Z Generation: 3 Resource Version: 29821 UID: 808dd2be-5077-4ff7-b6bb-21b7110126c7 Spec: 1 Cluster Network: Cidr: 10.128.0.0/14 Host Prefix: 23 External IP: Policy: Network Diagnostics: Mode: Source Placement: Target Placement: Network Type: OVNKubernetes Service Network: 172.30.0.0/16 Status: 2 Cluster Network: Cidr: 10.128.0.0/14 Host Prefix: 23 Cluster Network MTU: 1360 Conditions: Last Transition Time: 2024-08-08T11:51:50Z Message: Observed Generation: 0 Reason: AsExpected Status: True Type: NetworkDiagnosticsAvailable Network Type: OVNKubernetes Service Network: 172.30.0.0/16 Events: <none>",
"oc describe clusteroperators/network",
"oc get network.operator cluster -o yaml > network-config-backup.yaml",
"oc edit network.operator cluster",
"spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes clusterNetworkMTU: 8900 defaultNetwork: ovnKubernetesConfig: gatewayConfig: ipForwarding: Global",
"oc get clusteroperators network",
"oc patch network.operator cluster -p '{\"spec\":{\"defaultNetwork\":{\"ovnKubernetesConfig\":{\"gatewayConfig\":{\"ipForwarding\": \"Global\"}}}}}",
"oc logs --namespace=openshift-network-operator deployment/network-operator",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes clusterNetworkMTU: 8900",
"oc get -n openshift-dns-operator deployment/dns-operator",
"NAME READY UP-TO-DATE AVAILABLE AGE dns-operator 1/1 1 1 23h",
"oc get clusteroperator/dns",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE dns 4.1.15-0.11 True False False 92m",
"oc describe dns.operator/default",
"Name: default Namespace: Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: DNS Status: Cluster Domain: cluster.local 1 Cluster IP: 172.30.0.10 2",
"oc get networks.config/cluster -o jsonpath='{USD.status.serviceNetwork}'",
"[172.30.0.0/16]",
"oc edit dns.operator/default",
"apiVersion: operator.openshift.io/v1 kind: DNS metadata: name: default spec: cache: negativeTTL: 0s positiveTTL: 0s logLevel: Normal nodePlacement: {} operatorLogLevel: Normal servers: - name: example-server 1 zones: - example.com 2 forwardPlugin: policy: Random 3 upstreams: 4 - 1.1.1.1 - 2.2.2.2:5353 upstreamResolvers: 5 policy: Random 6 protocolStrategy: \"\" 7 transportConfig: {} 8 upstreams: - type: SystemResolvConf 9 - type: Network address: 1.2.3.4 10 port: 53 11 status: clusterDomain: cluster.local clusterIP: x.y.z.10 conditions:",
"oc describe clusteroperators/dns",
"Status: Conditions: Last Transition Time: <date> Message: DNS \"default\" is available. Reason: AsExpected Status: True Type: Available Last Transition Time: <date> Message: Desired and current number of DNSes are equal Reason: AsExpected Status: False Type: Progressing Last Transition Time: <date> Reason: DNSNotDegraded Status: False Type: Degraded Last Transition Time: <date> Message: DNS default is upgradeable: DNS Operator can be upgraded Reason: DNSUpgradeable Status: True Type: Upgradeable",
"oc logs -n openshift-dns-operator deployment/dns-operator -c dns-operator",
"oc patch dnses.operator.openshift.io/default -p '{\"spec\":{\"logLevel\":\"Debug\"}}' --type=merge",
"oc patch dnses.operator.openshift.io/default -p '{\"spec\":{\"logLevel\":\"Trace\"}}' --type=merge",
"oc get configmap/dns-default -n openshift-dns -o yaml",
"errors log . { class all }",
"oc patch dnses.operator.openshift.io/default -p '{\"spec\":{\"operatorLogLevel\":\"Debug\"}}' --type=merge",
"oc patch dnses.operator.openshift.io/default -p '{\"spec\":{\"operatorLogLevel\":\"Trace\"}}' --type=merge",
"oc get dnses.operator -A -oyaml",
"logLevel: Trace operatorLogLevel: Debug",
"oc logs -n openshift-dns ds/dns-default",
"oc edit dns.operator.openshift.io/default",
"apiVersion: operator.openshift.io/v1 kind: DNS metadata: name: default spec: cache: positiveTTL: 1h 1 negativeTTL: 0.5h10m 2",
"get configmap/dns-default -n openshift-dns -o yaml",
"cache 3600 { denial 9984 2400 }",
"patch dns.operator.openshift.io default --type merge --patch '{\"spec\":{\"managementState\":\"Unmanaged\"}}'",
"oc get dns.operator.openshift.io default -ojsonpath='{.spec.managementState}'",
"\"Unmanaged\"",
"oc adm taint nodes <node_name> dns-only=abc:NoExecute 1",
"oc edit dns.operator/default",
"spec: nodePlacement: tolerations: - effect: NoExecute key: \"dns-only\" 1 operator: Equal value: abc tolerationSeconds: 3600 2",
"spec: nodePlacement: nodeSelector: 1 node-role.kubernetes.io/control-plane: \"\"",
"oc edit dns.operator/default",
"apiVersion: operator.openshift.io/v1 kind: DNS metadata: name: default spec: servers: - name: example-server 1 zones: - example.com 2 forwardPlugin: transportConfig: transport: TLS 3 tls: caBundle: name: mycacert serverName: dnstls.example.com 4 policy: Random 5 upstreams: 6 - 1.1.1.1 - 2.2.2.2:5353 upstreamResolvers: 7 transportConfig: transport: TLS tls: caBundle: name: mycacert serverName: dnstls.example.com upstreams: - type: Network 8 address: 1.2.3.4 9 port: 53 10",
"oc get configmap/dns-default -n openshift-dns -o yaml",
"apiVersion: v1 data: Corefile: | example.com:5353 { forward . 1.1.1.1 2.2.2.2:5353 } bar.com:5353 example.com:5353 { forward . 3.3.3.3 4.4.4.4:5454 1 } .:5353 { errors health kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure upstream fallthrough in-addr.arpa ip6.arpa } prometheus :9153 forward . /etc/resolv.conf 1.2.3.4:53 { policy Random } cache 30 reload } kind: ConfigMap metadata: labels: dns.operator.openshift.io/owning-dns: default name: dns-default namespace: openshift-dns",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: apps.openshiftdemos.com",
"nodePlacement: nodeSelector: matchLabels: kubernetes.io/os: linux tolerations: - effect: NoSchedule operator: Exists",
"httpCaptureCookies: - matchType: Exact maxLength: 128 name: MYCOOKIE",
"httpCaptureHeaders: request: - maxLength: 256 name: Connection - maxLength: 128 name: User-Agent response: - maxLength: 256 name: Content-Type - maxLength: 256 name: Content-Length",
"apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: old: {} type: Old",
"oc edit IngressController default -n openshift-ingress-operator",
"apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11",
"oc describe IngressController default -n openshift-ingress-operator",
"Name: default Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: IngressController Spec: Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom",
"Issuer: C=US, O=Example Inc, CN=Example Global G2 TLS RSA SHA256 2020 CA1 Subject: SOME SIGNED CERT X509v3 CRL Distribution Points: Full Name: URI:http://crl.example.com/example.crl",
"oc create configmap router-ca-certs-default --from-file=ca-bundle.pem=client-ca.crt \\ 1 -n openshift-config",
"oc edit IngressController default -n openshift-ingress-operator",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: clientTLS: clientCertificatePolicy: Required clientCA: name: router-ca-certs-default allowedSubjectPatterns: - \"^/CN=example.com/ST=NC/C=US/O=Security/OU=OpenShiftUSD\"",
"openssl x509 -in custom-cert.pem -noout -subject subject= /CN=example.com/ST=NC/C=US/O=Security/OU=OpenShift",
"oc describe --namespace=openshift-ingress-operator ingresscontroller/default",
"oc describe clusteroperators/ingress",
"oc logs --namespace=openshift-ingress-operator deployments/ingress-operator -c <container_name>",
"oc describe --namespace=openshift-ingress-operator ingresscontroller/<name>",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: <custom_name> 1 namespace: openshift-ingress-operator spec: defaultCertificate: name: <custom-ingress-custom-certs> 2 replicas: 1 3 domain: <custom_domain> 4",
"oc create -f custom-ingress-controller.yaml",
"oc --namespace openshift-ingress-operator get ingresscontrollers",
"NAME AGE default 10m",
"oc --namespace openshift-ingress create secret tls custom-certs-default --cert=tls.crt --key=tls.key",
"oc patch --type=merge --namespace openshift-ingress-operator ingresscontrollers/default --patch '{\"spec\":{\"defaultCertificate\":{\"name\":\"custom-certs-default\"}}}'",
"echo Q | openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null | openssl x509 -noout -subject -issuer -enddate",
"subject=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = *.apps.example.com issuer=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = example.com notAfter=May 10 08:32:45 2022 GM",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: defaultCertificate: name: custom-certs-default",
"oc patch -n openshift-ingress-operator ingresscontrollers/default --type json -p USD'- op: remove\\n path: /spec/defaultCertificate'",
"echo Q | openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null | openssl x509 -noout -subject -issuer -enddate",
"subject=CN = *.apps.<domain> issuer=CN = ingress-operator@1620633373 notAfter=May 10 10:44:36 2023 GMT",
"oc create -n openshift-ingress-operator serviceaccount thanos && oc describe -n openshift-ingress-operator serviceaccount thanos",
"Name: thanos Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> Image pull secrets: thanos-dockercfg-kfvf2 Mountable secrets: thanos-dockercfg-kfvf2 Tokens: <none> Events: <none>",
"oc apply -f - <<EOF apiVersion: v1 kind: Secret metadata: name: thanos-token namespace: openshift-ingress-operator annotations: kubernetes.io/service-account.name: thanos type: kubernetes.io/service-account-token EOF",
"oc apply -f - <<EOF apiVersion: keda.sh/v1alpha1 kind: TriggerAuthentication metadata: name: keda-trigger-auth-prometheus namespace: openshift-ingress-operator spec: secretTargetRef: - parameter: bearerToken name: thanos-token key: token - parameter: ca name: thanos-token key: ca.crt EOF",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: thanos-metrics-reader namespace: openshift-ingress-operator rules: - apiGroups: - \"\" resources: - pods - nodes verbs: - get - apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch - apiGroups: - \"\" resources: - namespaces verbs: - get",
"oc apply -f thanos-metrics-reader.yaml",
"oc adm policy -n openshift-ingress-operator add-role-to-user thanos-metrics-reader -z thanos --role-namespace=openshift-ingress-operator",
"oc adm policy -n openshift-ingress-operator add-cluster-role-to-user cluster-monitoring-view -z thanos",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: ingress-scaler namespace: openshift-ingress-operator spec: scaleTargetRef: 1 apiVersion: operator.openshift.io/v1 kind: IngressController name: default envSourceContainerName: ingress-operator minReplicaCount: 1 maxReplicaCount: 20 2 cooldownPeriod: 1 pollingInterval: 1 triggers: - type: prometheus metricType: AverageValue metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 3 namespace: openshift-ingress-operator 4 metricName: 'kube-node-role' threshold: '1' query: 'sum(kube_node_role{role=\"worker\",service=\"kube-state-metrics\"})' 5 authModes: \"bearer\" authenticationRef: name: keda-trigger-auth-prometheus",
"oc apply -f ingress-autoscaler.yaml",
"oc get -n openshift-ingress-operator ingresscontroller/default -o yaml | grep replicas:",
"replicas: 3",
"oc get pods -n openshift-ingress",
"NAME READY STATUS RESTARTS AGE router-default-7b5df44ff-l9pmm 2/2 Running 0 17h router-default-7b5df44ff-s5sl5 2/2 Running 0 3d22h router-default-7b5df44ff-wwsth 2/2 Running 0 66s",
"oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{USD.status.availableReplicas}'",
"2",
"oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{\"spec\":{\"replicas\": 3}}' --type=merge",
"ingresscontroller.operator.openshift.io/default patched",
"oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{USD.status.availableReplicas}'",
"3",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 3 1",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Container",
"oc -n openshift-ingress logs deployment.apps/router-default -c logs",
"2020-05-11T19:11:50.135710+00:00 router-default-57dfc6cd95-bpmk6 router-default-57dfc6cd95-bpmk6 haproxy[108]: 174.19.21.82:39654 [11/May/2020:19:11:50.133] public be_http:hello-openshift:hello-openshift/pod:hello-openshift:hello-openshift:10.128.2.12:8080 0/0/1/0/1 200 142 - - --NI 1/1/0/0/0 0/0 \"GET / HTTP/1.1\"",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 port: 10514",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 port: 10514 httpLogFormat: '%ci:%cp [%t] %ft %b/%s %B %bq %HM %HU %HV'",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: null",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 maxLength: 4096 port: 10514",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Container container: maxLength: 8192",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{\"spec\":{\"tuningOptions\": {\"threadCount\": 8}}}'",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: <name> 1 spec: domain: <domain> 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal 3",
"oc create -f <name>-ingress-controller.yaml 1",
"oc --all-namespaces=true get ingresscontrollers",
"oc -n openshift-ingress-operator edit ingresscontroller/default",
"spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal type: LoadBalancerService",
"oc -n openshift-ingress edit svc/router-default -o yaml",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{\"spec\":{\"tuningOptions\": {\"healthCheckInterval\": \"8s\"}}}'",
"oc replace --force --wait --filename - <<EOF apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: default spec: endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal EOF",
"oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{\"spec\":{\"routeAdmission\":{\"namespaceOwnership\":\"InterNamespaceAllowed\"}}}' --type=merge",
"spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed",
"oc edit IngressController",
"spec: routeAdmission: wildcardPolicy: WildcardsDisallowed # or WildcardsAllowed",
"apiVersion: operator.openshift.io/v1 kind: IngressController spec: httpHeaders: actions: response: - name: X-Frame-Options action: type: Set set: value: DENY",
"apiVersion: route.openshift.io/v1 kind: Route spec: httpHeaders: actions: response: - name: X-Frame-Options action: type: Set set: value: SAMEORIGIN",
"frontend public http-response set-header X-Frame-Options 'DENY' frontend fe_sni http-response set-header X-Frame-Options 'DENY' frontend fe_no_sni http-response set-header X-Frame-Options 'DENY' backend be_secure:openshift-monitoring:alertmanager-main http-response set-header X-Frame-Options 'SAMEORIGIN'",
"oc -n openshift-ingress-operator edit ingresscontroller/default",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpHeaders: actions: 1 request: 2 - name: X-Forwarded-Client-Cert 3 action: type: Set 4 set: value: \"%{+Q}[ssl_c_der,base64]\" 5 - name: X-SSL-Client-Der action: type: Delete",
"oc edit IngressController",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpHeaders: forwardedHeaderPolicy: Append",
"oc -n openshift-ingress-operator annotate ingresscontrollers/<ingresscontroller_name> ingress.operator.openshift.io/default-enable-http2=true 1",
"oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=true",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster annotations: ingress.operator.openshift.io/default-enable-http2: \"true\"",
"oc -n openshift-ingress-operator annotate ingresscontrollers/<ingresscontroller_name> ingress.operator.openshift.io/default-enable-http2=false 1",
"oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=false",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster annotations: ingress.operator.openshift.io/default-enable-http2: \"false\"",
"oc -n openshift-ingress-operator edit ingresscontroller/default",
"spec: endpointPublishingStrategy: hostNetwork: protocol: PROXY type: HostNetwork",
"spec: endpointPublishingStrategy: nodePort: protocol: PROXY type: NodePortService",
"spec: endpointPublishingStrategy: private: protocol: PROXY type: Private",
"oc edit ingresses.config/cluster -o yaml",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: apps.example.com 1 appsDomain: <test.example.com> 2",
"oc expose service hello-openshift route.route.openshift.io/hello-openshift exposed",
"oc get routes NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD hello-openshift hello_openshift-<my_project>.test.example.com hello-openshift 8080-tcp None",
"oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{\"spec\":{\"httpHeaders\":{\"headerNameCaseAdjustments\":[\"Host\"]}}}'",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/h1-adjust-case: true 1 name: <application_name> namespace: <application_name>",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpHeaders: headerNameCaseAdjustments: - Host",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/h1-adjust-case: true 1 name: my-application namespace: my-application spec: to: kind: Service name: my-application",
"oc edit -n openshift-ingress-operator ingresscontrollers/default",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpCompression: mimeTypes: - \"text/html\" - \"text/css; charset=utf-8\" - \"application/json\"",
"oc get pods -n openshift-ingress",
"NAME READY STATUS RESTARTS AGE router-default-76bfffb66c-46qwp 1/1 Running 0 11h",
"oc rsh <router_pod_name> cat metrics-auth/statsUsername",
"oc rsh <router_pod_name> cat metrics-auth/statsPassword",
"oc describe pod <router_pod>",
"curl -u <user>:<password> http://<router_IP>:<stats_port>/metrics",
"curl -u user:password https://<router_IP>:<stats_port>/metrics -k",
"curl -u <user>:<password> http://<router_IP>:<stats_port>/metrics",
"HELP haproxy_backend_connections_total Total number of connections. TYPE haproxy_backend_connections_total gauge haproxy_backend_connections_total{backend=\"http\",namespace=\"default\",route=\"hello-route\"} 0 haproxy_backend_connections_total{backend=\"http\",namespace=\"default\",route=\"hello-route-alt\"} 0 haproxy_backend_connections_total{backend=\"http\",namespace=\"default\",route=\"hello-route01\"} 0 HELP haproxy_exporter_server_threshold Number of servers tracked and the current threshold value. TYPE haproxy_exporter_server_threshold gauge haproxy_exporter_server_threshold{type=\"current\"} 11 haproxy_exporter_server_threshold{type=\"limit\"} 500 HELP haproxy_frontend_bytes_in_total Current total of incoming bytes. TYPE haproxy_frontend_bytes_in_total gauge haproxy_frontend_bytes_in_total{frontend=\"fe_no_sni\"} 0 haproxy_frontend_bytes_in_total{frontend=\"fe_sni\"} 0 haproxy_frontend_bytes_in_total{frontend=\"public\"} 119070 HELP haproxy_server_bytes_in_total Current total of incoming bytes. TYPE haproxy_server_bytes_in_total gauge haproxy_server_bytes_in_total{namespace=\"\",pod=\"\",route=\"\",server=\"fe_no_sni\",service=\"\"} 0 haproxy_server_bytes_in_total{namespace=\"\",pod=\"\",route=\"\",server=\"fe_sni\",service=\"\"} 0 haproxy_server_bytes_in_total{namespace=\"default\",pod=\"docker-registry-5-nk5fz\",route=\"docker-registry\",server=\"10.130.0.89:5000\",service=\"docker-registry\"} 0 haproxy_server_bytes_in_total{namespace=\"default\",pod=\"hello-rc-vkjqx\",route=\"hello-route\",server=\"10.130.0.90:8080\",service=\"hello-svc-1\"} 0",
"http://<user>:<password>@<router_IP>:<stats_port>",
"http://<user>:<password>@<router_ip>:1936/metrics;csv",
"oc -n openshift-config create configmap my-custom-error-code-pages --from-file=error-page-503.http --from-file=error-page-404.http",
"oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{\"spec\":{\"httpErrorCodePages\":{\"name\":\"my-custom-error-code-pages\"}}}' --type=merge",
"oc get cm default-errorpages -n openshift-ingress",
"NAME DATA AGE default-errorpages 2 25s 1",
"oc -n openshift-ingress rsh <router_pod> cat /var/lib/haproxy/conf/error_code_pages/error-page-503.http",
"oc -n openshift-ingress rsh <router_pod> cat /var/lib/haproxy/conf/error_code_pages/error-page-404.http",
"oc new-project test-ingress",
"oc new-app django-psql-example",
"curl -vk <route_hostname>",
"curl -vk <route_hostname>",
"oc -n openshift-ingress rsh <router> cat /var/lib/haproxy/conf/haproxy.config | grep errorfile",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{\"spec\":{\"tuningOptions\": {\"maxConnections\": 7500}}}'",
"cat << EOF| oc create -f - apiVersion: v1 kind: Namespace metadata: labels: pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/enforce-version: v1.24 name: openshift-ingress-node-firewall EOF",
"cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ingress-node-firewall-operators namespace: openshift-ingress-node-firewall EOF",
"cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ingress-node-firewall-sub namespace: openshift-ingress-node-firewall spec: name: ingress-node-firewall channel: stable source: redhat-operators sourceNamespace: openshift-marketplace EOF",
"oc get ip -n openshift-ingress-node-firewall",
"NAME CSV APPROVAL APPROVED install-5cvnz ingress-node-firewall.4.18.0-202211122336 Automatic true",
"oc get csv -n openshift-ingress-node-firewall",
"NAME DISPLAY VERSION REPLACES PHASE ingress-node-firewall.4.18.0-202211122336 Ingress Node Firewall Operator 4.18.0-202211122336 ingress-node-firewall.4.18.0-202211102047 Succeeded",
"oc annotate ns/openshift-ingress-node-firewall workload.openshift.io/allowed=management",
"oc apply -f rule.yaml",
"spec: nodeSelector: node-role.kubernetes.io/worker: \"\"",
"apiVersion: ingressnodefirewall.openshift.io/v1alpha1 kind: IngressNodeFirewallConfig metadata: name: ingressnodefirewallconfig namespace: openshift-ingress-node-firewall spec: nodeSelector: node-role.kubernetes.io/worker: \"\"",
"apiVersion: ingressnodefirewall.openshift.io/v1alpha1 kind: IngressNodeFirewall metadata: name: ingressnodefirewall spec: interfaces: - eth0 nodeSelector: matchLabels: <ingress_firewall_label_name>: <label_value> 1 ingress: - sourceCIDRs: - 172.16.0.0/12 rules: - order: 10 protocolConfig: protocol: ICMP icmp: icmpType: 8 #ICMP Echo request action: Deny - order: 20 protocolConfig: protocol: TCP tcp: ports: \"8000-9000\" action: Deny - sourceCIDRs: - fc00:f853:ccd:e793::0/64 rules: - order: 10 protocolConfig: protocol: ICMPv6 icmpv6: icmpType: 128 #ICMPV6 Echo request action: Deny",
"apiVersion: ingressnodefirewall.openshift.io/v1alpha1 kind: IngressNodeFirewall metadata: name: ingressnodefirewall-zero-trust spec: interfaces: - eth1 1 nodeSelector: matchLabels: <ingress_firewall_label_name>: <label_value> 2 ingress: - sourceCIDRs: - 0.0.0.0/0 3 rules: - order: 10 protocolConfig: protocol: TCP tcp: ports: 22 action: Allow - order: 20 action: Deny 4",
"oc label namespace openshift-ingress-node-firewall pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/warn=privileged --overwrite",
"apiVersion: ingressnodefirewall.openshift.io/v1alpha1 kind: IngressNodeFirewallConfig metadata: name: ingressnodefirewallconfig namespace: openshift-ingress-node-firewall spec: nodeSelector: node-role.kubernetes.io/worker: \"\" ebpfProgramManagerMode: <ebpf_mode>",
"oc get ingressnodefirewall",
"oc get <resource> <name> -o yaml",
"oc get crds | grep ingressnodefirewall",
"NAME READY UP-TO-DATE AVAILABLE AGE ingressnodefirewallconfigs.ingressnodefirewall.openshift.io 2022-08-25T10:03:01Z ingressnodefirewallnodestates.ingressnodefirewall.openshift.io 2022-08-25T10:03:00Z ingressnodefirewalls.ingressnodefirewall.openshift.io 2022-08-25T10:03:00Z",
"oc get pods -n openshift-ingress-node-firewall",
"NAME READY STATUS RESTARTS AGE ingress-node-firewall-controller-manager 2/2 Running 0 5d21h ingress-node-firewall-daemon-pqx56 3/3 Running 0 5d21h",
"oc adm must-gather - gather_ingress_node_firewall",
"cat << EOF| oc create -f - apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator annotations: workload.openshift.io/allowed: management EOF",
"cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator EOF",
"cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator spec: channel: stable name: sriov-network-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF",
"cat <<EOF | oc create -f - apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: enableInjector: true enableOperatorWebhook: true logLevel: 2 disableDrain: false EOF",
"oc get csv -n openshift-sriov-network-operator -o custom-columns=Name:.metadata.name,Phase:.status.phase",
"Name Phase sriov-network-operator.4.18.0-202406131906 Succeeded",
"oc annotate ns/openshift-sriov-network-operator workload.openshift.io/allowed=management",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: disableDrain: false enableInjector: true enableOperatorWebhook: true logLevel: 2 featureGates: metricsExporter: false",
"oc apply -f sriovOperatorConfig.yaml",
"oc get pods -n openshift-sriov-network-operator",
"NAME READY STATUS RESTARTS AGE network-resources-injector-5cz5p 1/1 Running 0 10m network-resources-injector-dwqpx 1/1 Running 0 10m network-resources-injector-lktz5 1/1 Running 0 10m",
"oc patch sriovoperatorconfig default --type=merge -n openshift-sriov-network-operator --patch '{ \"spec\": { \"enableInjector\": <value> } }'",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: enableInjector: <value>",
"oc get pods -n openshift-sriov-network-operator",
"NAME READY STATUS RESTARTS AGE operator-webhook-9jkw6 1/1 Running 0 16m operator-webhook-kbr5p 1/1 Running 0 16m operator-webhook-rpfrl 1/1 Running 0 16m",
"oc patch sriovoperatorconfig default --type=merge -n openshift-sriov-network-operator --patch '{ \"spec\": { \"enableOperatorWebhook\": <value> } }'",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: enableOperatorWebhook: <value>",
"oc patch sriovoperatorconfig default --type=json -n openshift-sriov-network-operator --patch '[{ \"op\": \"replace\", \"path\": \"/spec/configDaemonNodeSelector\", \"value\": {<node_label>} }]'",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: <node_label>",
"oc patch sriovoperatorconfig default --type=merge -n openshift-sriov-network-operator --patch '{ \"spec\": { \"disableDrain\": true, \"configDaemonNodeSelector\": { \"node-role.kubernetes.io/master\": \"\" } } }'",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: disableDrain: true configDaemonNodeSelector: node-role.kubernetes.io/master: \"\"",
"apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subsription namespace: openshift-sriov-network-operator spec: channel: stable name: sriov-network-operator config: nodeSelector: node-role.kubernetes.io/worker: \"\" source: s/qe-app-registry/redhat-operators sourceNamespace: openshift-marketplace",
"oc get csv -n openshift-sriov-network-operator",
"NAME DISPLAY VERSION REPLACES PHASE sriov-network-operator.4.18.0-202211021237 SR-IOV Network Operator 4.18.0-202211021237 sriov-network-operator.4.18.0-202210290517 Succeeded",
"oc get pods -n openshift-sriov-network-operator",
"(sriov_vf_tx_packets * on (pciAddr,node) group_left(pod,namespace) sriov_kubepoddevice) * on (pod,namespace) group_left (label_app_kubernetes_io_name) kube_pod_labels",
"oc label ns/openshift-sriov-network-operator openshift.io/cluster-monitoring=true",
"oc patch -n openshift-sriov-network-operator sriovoperatorconfig/default --type='merge' -p='{\"spec\": {\"featureGates\": {\"metricsExporter\": true}}}'",
"oc get pods -n openshift-sriov-network-operator",
"NAME READY STATUS RESTARTS AGE operator-webhook-hzfg4 1/1 Running 0 5d22h sriov-network-config-daemon-tr54m 1/1 Running 0 5d22h sriov-network-metrics-exporter-z5d7t 1/1 Running 0 10s sriov-network-operator-cc6fd88bc-9bsmt 1/1 Running 0 5d22h",
"oc delete sriovnetwork -n openshift-sriov-network-operator --all",
"oc delete sriovnetworknodepolicy -n openshift-sriov-network-operator --all",
"oc delete sriovibnetwork -n openshift-sriov-network-operator --all",
"oc delete sriovoperatorconfigs -n openshift-sriov-network-operator --all",
"oc delete crd sriovibnetworks.sriovnetwork.openshift.io",
"oc delete crd sriovnetworknodepolicies.sriovnetwork.openshift.io",
"oc delete crd sriovnetworknodestates.sriovnetwork.openshift.io",
"oc delete crd sriovnetworkpoolconfigs.sriovnetwork.openshift.io",
"oc delete crd sriovnetworks.sriovnetwork.openshift.io",
"oc delete crd sriovoperatorconfigs.sriovnetwork.openshift.io",
"oc delete namespace openshift-sriov-network-operator",
"apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: sample-anp-deny-pass-rules 1 spec: priority: 50 2 subject: namespaces: matchLabels: kubernetes.io/metadata.name: example.name 3 ingress: 4 - name: \"deny-all-ingress-tenant-1\" 5 action: \"Deny\" from: - pods: namespaceSelector: matchLabels: custom-anp: tenant-1 podSelector: matchLabels: custom-anp: tenant-1 6 egress: 7 - name: \"pass-all-egress-to-tenant-1\" action: \"Pass\" to: - pods: namespaceSelector: matchLabels: custom-anp: tenant-1 podSelector: matchLabels: custom-anp: tenant-1",
"apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: allow-monitoring spec: priority: 9 subject: namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well. ingress: - name: \"allow-ingress-from-monitoring\" action: \"Allow\" from: - namespaces: matchLabels: kubernetes.io/metadata.name: monitoring",
"apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: block-monitoring spec: priority: 5 subject: namespaces: matchLabels: security: restricted ingress: - name: \"deny-ingress-from-monitoring\" action: \"Deny\" from: - namespaces: matchLabels: kubernetes.io/metadata.name: monitoring",
"apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: pass-monitoring spec: priority: 7 subject: namespaces: matchLabels: security: internal ingress: - name: \"pass-ingress-from-monitoring\" action: \"Pass\" from: - namespaces: matchLabels: kubernetes.io/metadata.name: monitoring",
"apiVersion: policy.networking.k8s.io/v1alpha1 kind: BaselineAdminNetworkPolicy metadata: name: default 1 spec: subject: namespaces: matchLabels: kubernetes.io/metadata.name: example.name 2 ingress: 3 - name: \"deny-all-ingress-from-tenant-1\" 4 action: \"Deny\" from: - pods: namespaceSelector: matchLabels: custom-banp: tenant-1 5 podSelector: matchLabels: custom-banp: tenant-1 6 egress: - name: \"allow-all-egress-to-tenant-1\" action: \"Allow\" to: - pods: namespaceSelector: matchLabels: custom-banp: tenant-1 podSelector: matchLabels: custom-banp: tenant-1",
"apiVersion: policy.networking.k8s.io/v1alpha1 kind: BaselineAdminNetworkPolicy metadata: name: default spec: subject: namespaces: matchLabels: security: internal ingress: - name: \"deny-ingress-from-monitoring\" action: \"Deny\" from: - namespaces: matchLabels: kubernetes.io/metadata.name: monitoring",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-monitoring namespace: tenant 1 spec: podSelector: policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: monitoring",
"apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: egress-security-allow spec: egress: - action: Deny to: - nodes: matchExpressions: - key: node-role.kubernetes.io/worker operator: Exists - action: Allow name: allow-to-kubernetes-api-server-and-engr-dept-pods ports: - portNumber: port: 6443 protocol: TCP to: - nodes: 1 matchExpressions: - key: node-role.kubernetes.io/control-plane operator: Exists - pods: 2 namespaceSelector: matchLabels: dept: engr podSelector: {} priority: 55 subject: 3 namespaces: matchExpressions: - key: security 4 operator: In values: - restricted - confidential - internal",
"apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: network-as-egress-peer spec: priority: 70 subject: namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well. egress: - name: \"deny-egress-to-external-dns-servers\" action: \"Deny\" to: - networks: 1 - 8.8.8.8/32 - 8.8.4.4/32 - 208.67.222.222/32 ports: - portNumber: protocol: UDP port: 53 - name: \"allow-all-egress-to-intranet\" action: \"Allow\" to: - networks: 2 - 89.246.180.0/22 - 60.45.72.0/22 - name: \"allow-all-intra-cluster-traffic\" action: \"Allow\" to: - namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well. - name: \"pass-all-egress-to-internet\" action: \"Pass\" to: - networks: - 0.0.0.0/0 3 --- apiVersion: policy.networking.k8s.io/v1alpha1 kind: BaselineAdminNetworkPolicy metadata: name: default spec: subject: namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well. egress: - name: \"deny-all-egress-to-internet\" action: \"Deny\" to: - networks: - 0.0.0.0/0 4 ---",
"apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: egress-peer-1 1 spec: egress: 2 - action: \"Allow\" name: \"allow-egress\" to: - nodes: matchExpressions: - key: worker-group operator: In values: - workloads # Egress traffic from nodes with label worker-group: workloads is allowed. - networks: - 104.154.164.170/32 - pods: namespaceSelector: matchLabels: apps: external-apps podSelector: matchLabels: app: web # This rule in the policy allows the traffic directed to pods labeled apps: web in projects with apps: external-apps to leave the cluster. - action: \"Deny\" name: \"deny-egress\" to: - nodes: matchExpressions: - key: worker-group operator: In values: - infra # Egress traffic from nodes with label worker-group: infra is denied. - networks: - 104.154.164.160/32 # Egress traffic to this IP address from cluster is denied. - pods: namespaceSelector: matchLabels: apps: internal-apps podSelector: {} - action: \"Pass\" name: \"pass-egress\" to: - nodes: matchExpressions: - key: node-role.kubernetes.io/worker operator: Exists # All other egress traffic is passed to NetworkPolicy or BANP for evaluation. priority: 30 3 subject: 4 namespaces: matchLabels: apps: all-apps",
"Conditions: Last Transition Time: 2024-06-08T20:29:00Z Message: Setting up OVN DB plumbing was successful Reason: SetupSucceeded Status: True Type: Ready-In-Zone-ovn-control-plane Last Transition Time: 2024-06-08T20:29:00Z Message: Setting up OVN DB plumbing was successful Reason: SetupSucceeded Status: True Type: Ready-In-Zone-ovn-worker Last Transition Time: 2024-06-08T20:29:00Z Message: Setting up OVN DB plumbing was successful Reason: SetupSucceeded Status: True Type: Ready-In-Zone-ovn-worker2",
"Status: Conditions: Last Transition Time: 2024-06-25T12:47:44Z Message: error attempting to add ANP cluster-control with priority 600 because, OVNK only supports priority ranges 0-99 Reason: SetupFailed Status: False Type: Ready-In-Zone-example-worker-1.example.example-org.net Last Transition Time: 2024-06-25T12:47:45Z Message: error attempting to add ANP cluster-control with priority 600 because, OVNK only supports priority ranges 0-99 Reason: SetupFailed Status: False Type: Ready-In-Zone-example-worker-0.example.example-org.net Last Transition Time: 2024-06-25T12:47:44Z Message: error attempting to add ANP cluster-control with priority 600 because, OVNK only supports priority ranges 0-99 Reason: SetupFailed Status: False Type: Ready-In-Zone-example-ctlplane-1.example.example-org.net Last Transition Time: 2024-06-25T12:47:44Z Message: error attempting to add ANP cluster-control with priority 600 because, OVNK only supports priority ranges 0-99 Reason: SetupFailed Status: False Type: Ready-In-Zone-example-ctlplane-2.example.example-org.net Last Transition Time: 2024-06-25T12:47:44Z Message: error attempting to add ANP cluster-control with priority 600 because, OVNK only supports priority ranges 0-99 Reason: SetupFailed Status: False Type: Ready-In-Zone-example-ctlplane-0.example.example-org.net ```",
"apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: cluster-control spec: priority: 34 subject: namespaces: matchLabels: anp: cluster-control-anp # Only namespaces with this label have this ANP ingress: - name: \"allow-from-ingress-router\" # rule0 action: \"Allow\" from: - namespaces: matchLabels: policy-group.network.openshift.io/ingress: \"\" - name: \"allow-from-monitoring\" # rule1 action: \"Allow\" from: - namespaces: matchLabels: kubernetes.io/metadata.name: openshift-monitoring ports: - portNumber: protocol: TCP port: 7564 - namedPort: \"scrape\" - name: \"allow-from-open-tenants\" # rule2 action: \"Allow\" from: - namespaces: # open tenants matchLabels: tenant: open - name: \"pass-from-restricted-tenants\" # rule3 action: \"Pass\" from: - namespaces: # restricted tenants matchLabels: tenant: restricted - name: \"default-deny\" # rule4 action: \"Deny\" from: - namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well. egress: - name: \"allow-to-dns\" # rule0 action: \"Allow\" to: - pods: namespaceSelector: matchlabels: kubernetes.io/metadata.name: openshift-dns podSelector: matchlabels: app: dns ports: - portNumber: protocol: UDP port: 5353 - name: \"allow-to-kapi-server\" # rule1 action: \"Allow\" to: - nodes: matchExpressions: - key: node-role.kubernetes.io/control-plane operator: Exists ports: - portNumber: protocol: TCP port: 6443 - name: \"allow-to-splunk\" # rule2 action: \"Allow\" to: - namespaces: matchlabels: tenant: splunk ports: - portNumber: protocol: TCP port: 8991 - portNumber: protocol: TCP port: 8992 - name: \"allow-to-open-tenants-and-intranet-and-worker-nodes\" # rule3 action: \"Allow\" to: - nodes: # worker-nodes matchExpressions: - key: node-role.kubernetes.io/worker operator: Exists - networks: # intranet - 172.29.0.0/30 - 10.0.54.0/19 - 10.0.56.38/32 - 10.0.69.0/24 - namespaces: # open tenants matchLabels: tenant: open - name: \"pass-to-restricted-tenants\" # rule4 action: \"Pass\" to: - namespaces: # restricted tenants matchLabels: tenant: restricted - name: \"default-deny\" action: \"Deny\" to: - networks: - 0.0.0.0/0",
"oc get pods -n openshift-ovn-kubernetes -owide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES ovnkube-control-plane-5c95487779-8k9fd 2/2 Running 0 34m 10.0.0.5 ci-ln-0tv5gg2-72292-6sjw5-master-0 <none> <none> ovnkube-control-plane-5c95487779-v2xn8 2/2 Running 0 34m 10.0.0.3 ci-ln-0tv5gg2-72292-6sjw5-master-1 <none> <none> ovnkube-node-524dt 8/8 Running 0 33m 10.0.0.4 ci-ln-0tv5gg2-72292-6sjw5-master-2 <none> <none> ovnkube-node-gbwr9 8/8 Running 0 24m 10.0.128.4 ci-ln-0tv5gg2-72292-6sjw5-worker-c-s9gqt <none> <none> ovnkube-node-h4fpx 8/8 Running 0 33m 10.0.0.5 ci-ln-0tv5gg2-72292-6sjw5-master-0 <none> <none> ovnkube-node-j4hzw 8/8 Running 0 24m 10.0.128.2 ci-ln-0tv5gg2-72292-6sjw5-worker-a-hzbh5 <none> <none> ovnkube-node-wdhgv 8/8 Running 0 33m 10.0.0.3 ci-ln-0tv5gg2-72292-6sjw5-master-1 <none> <none> ovnkube-node-wfncn 8/8 Running 0 24m 10.0.128.3 ci-ln-0tv5gg2-72292-6sjw5-worker-b-5bb7f <none> <none>",
"oc rsh -c nbdb -n openshift-ovn-kubernetes ovnkube-node-524dt",
"ovn-nbctl find ACL 'external_ids{>=}{\"k8s.ovn.org/owner-type\"=AdminNetworkPolicy,\"k8s.ovn.org/name\"=cluster-control}'",
"_uuid : 0d5e4722-b608-4bb1-b625-23c323cc9926 action : allow-related direction : to-lport external_ids : {direction=Ingress, gress-index=\"2\", \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:2:None\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy, port-policy-protocol=None} label : 0 log : false match : \"outport == @a14645450421485494999 && ((ip4.src == USDa13730899355151937870))\" meter : acl-logging name : \"ANP:cluster-control:Ingress:2\" options : {} priority : 26598 severity : [] tier : 1 _uuid : b7be6472-df67-439c-8c9c-f55929f0a6e0 action : drop direction : from-lport external_ids : {direction=Egress, gress-index=\"5\", \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Egress:5:None\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy, port-policy-protocol=None} label : 0 log : false match : \"inport == @a14645450421485494999 && ((ip4.dst == USDa11452480169090787059))\" meter : acl-logging name : \"ANP:cluster-control:Egress:5\" options : {apply-after-lb=\"true\"} priority : 26595 severity : [] tier : 1 _uuid : 5a6e5bb4-36eb-4209-b8bc-c611983d4624 action : pass direction : to-lport external_ids : {direction=Ingress, gress-index=\"3\", \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:3:None\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy, port-policy-protocol=None} label : 0 log : false match : \"outport == @a14645450421485494999 && ((ip4.src == USDa764182844364804195))\" meter : acl-logging name : \"ANP:cluster-control:Ingress:3\" options : {} priority : 26597 severity : [] tier : 1 _uuid : 04f20275-c410-405c-a923-0e677f767889 action : pass direction : from-lport external_ids : {direction=Egress, gress-index=\"4\", \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Egress:4:None\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy, port-policy-protocol=None} label : 0 log : false match : \"inport == @a14645450421485494999 && ((ip4.dst == USDa5972452606168369118))\" meter : acl-logging name : \"ANP:cluster-control:Egress:4\" options : {apply-after-lb=\"true\"} priority : 26596 severity : [] tier : 1 _uuid : 4b5d836a-e0a3-4088-825e-f9f0ca58e538 action : drop direction : to-lport external_ids : {direction=Ingress, gress-index=\"4\", \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:4:None\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy, port-policy-protocol=None} label : 0 log : false match : \"outport == @a14645450421485494999 && ((ip4.src == USDa13814616246365836720))\" meter : acl-logging name : \"ANP:cluster-control:Ingress:4\" options : {} priority : 26596 severity : [] tier : 1 _uuid : 5d09957d-d2cc-4f5a-9ddd-b97d9d772023 action : allow-related direction : from-lport external_ids : {direction=Egress, gress-index=\"2\", \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Egress:2:tcp\", \"k8s.ovn.org/name\"=cluster-control, 
\"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy, port-policy-protocol=tcp} label : 0 log : false match : \"inport == @a14645450421485494999 && ((ip4.dst == USDa18396736153283155648)) && tcp && tcp.dst=={8991,8992}\" meter : acl-logging name : \"ANP:cluster-control:Egress:2\" options : {apply-after-lb=\"true\"} priority : 26598 severity : [] tier : 1 _uuid : 1a68a5ed-e7f9-47d0-b55c-89184d97e81a action : allow-related direction : from-lport external_ids : {direction=Egress, gress-index=\"1\", \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Egress:1:tcp\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy, port-policy-protocol=tcp} label : 0 log : false match : \"inport == @a14645450421485494999 && ((ip4.dst == USDa10706246167277696183)) && tcp && tcp.dst==6443\" meter : acl-logging name : \"ANP:cluster-control:Egress:1\" options : {apply-after-lb=\"true\"} priority : 26599 severity : [] tier : 1 _uuid : aa1a224d-7960-4952-bdfb-35246bafbac8 action : allow-related direction : to-lport external_ids : {direction=Ingress, gress-index=\"1\", \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:1:tcp\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy, port-policy-protocol=tcp} label : 0 log : false match : \"outport == @a14645450421485494999 && ((ip4.src == USDa6786643370959569281)) && tcp && tcp.dst==7564\" meter : acl-logging name : \"ANP:cluster-control:Ingress:1\" options : {} priority : 26599 severity : [] tier : 1 _uuid : 1a27d30e-3f96-4915-8ddd-ade7f22c117b action : allow-related direction : from-lport external_ids : {direction=Egress, gress-index=\"3\", \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Egress:3:None\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy, port-policy-protocol=None} label : 0 log : false match : \"inport == @a14645450421485494999 && ((ip4.dst == USDa10622494091691694581))\" meter : acl-logging name : \"ANP:cluster-control:Egress:3\" options : {apply-after-lb=\"true\"} priority : 26597 severity : [] tier : 1 _uuid : b23a087f-08f8-4225-8c27-4a9a9ee0c407 action : allow-related direction : from-lport external_ids : {direction=Egress, gress-index=\"0\", \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Egress:0:udp\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy, port-policy-protocol=udp} label : 0 log : false match : \"inport == @a14645450421485494999 && ((ip4.dst == USDa13517855690389298082)) && udp && udp.dst==5353\" meter : acl-logging name : \"ANP:cluster-control:Egress:0\" options : {apply-after-lb=\"true\"} priority : 26600 severity : [] tier : 1 _uuid : d14ed5cf-2e06-496e-8cae-6b76d5dd5ccd action : allow-related direction : to-lport external_ids : {direction=Ingress, gress-index=\"0\", \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:0:None\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy, port-policy-protocol=None} label : 0 log : false match : 
\"outport == @a14645450421485494999 && ((ip4.src == USDa14545668191619617708))\" meter : acl-logging name : \"ANP:cluster-control:Ingress:0\" options : {} priority : 26600 severity : [] tier : 1",
"ovn-nbctl find ACL 'external_ids{>=}{\"k8s.ovn.org/owner-type\"=AdminNetworkPolicy,direction=Ingress,\"k8s.ovn.org/name\"=cluster-control,gress-index=\"1\"}'",
"_uuid : aa1a224d-7960-4952-bdfb-35246bafbac8 action : allow-related direction : to-lport external_ids : {direction=Ingress, gress-index=\"1\", \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:1:tcp\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy, port-policy-protocol=tcp} label : 0 log : false match : \"outport == @a14645450421485494999 && ((ip4.src == USDa6786643370959569281)) && tcp && tcp.dst==7564\" meter : acl-logging name : \"ANP:cluster-control:Ingress:1\" options : {} priority : 26599 severity : [] tier : 1",
"ovn-nbctl find Address_Set 'external_ids{>=}{\"k8s.ovn.org/owner-type\"=AdminNetworkPolicy,\"k8s.ovn.org/name\"=cluster-control}'",
"_uuid : 56e89601-5552-4238-9fc3-8833f5494869 addresses : [\"192.168.194.135\", \"192.168.194.152\", \"192.168.194.193\", \"192.168.194.254\"] external_ids : {direction=Egress, gress-index=\"1\", ip-family=v4, \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Egress:1:v4\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy} name : a10706246167277696183 _uuid : 7df9330d-380b-4bdb-8acd-4eddeda2419c addresses : [\"10.132.0.10\", \"10.132.0.11\", \"10.132.0.12\", \"10.132.0.13\", \"10.132.0.14\", \"10.132.0.15\", \"10.132.0.16\", \"10.132.0.17\", \"10.132.0.5\", \"10.132.0.7\", \"10.132.0.71\", \"10.132.0.75\", \"10.132.0.8\", \"10.132.0.81\", \"10.132.0.9\", \"10.132.2.10\", \"10.132.2.11\", \"10.132.2.12\", \"10.132.2.14\", \"10.132.2.15\", \"10.132.2.3\", \"10.132.2.4\", \"10.132.2.5\", \"10.132.2.6\", \"10.132.2.7\", \"10.132.2.8\", \"10.132.2.9\", \"10.132.3.64\", \"10.132.3.65\", \"10.132.3.72\", \"10.132.3.73\", \"10.132.3.76\", \"10.133.0.10\", \"10.133.0.11\", \"10.133.0.12\", \"10.133.0.13\", \"10.133.0.14\", \"10.133.0.15\", \"10.133.0.16\", \"10.133.0.17\", \"10.133.0.18\", \"10.133.0.19\", \"10.133.0.20\", \"10.133.0.21\", \"10.133.0.22\", \"10.133.0.23\", \"10.133.0.24\", \"10.133.0.25\", \"10.133.0.26\", \"10.133.0.27\", \"10.133.0.28\", \"10.133.0.29\", \"10.133.0.30\", \"10.133.0.31\", \"10.133.0.32\", \"10.133.0.33\", \"10.133.0.34\", \"10.133.0.35\", \"10.133.0.36\", \"10.133.0.37\", \"10.133.0.38\", \"10.133.0.39\", \"10.133.0.40\", \"10.133.0.41\", \"10.133.0.42\", \"10.133.0.44\", \"10.133.0.45\", \"10.133.0.46\", \"10.133.0.47\", \"10.133.0.48\", \"10.133.0.5\", \"10.133.0.6\", \"10.133.0.7\", \"10.133.0.8\", \"10.133.0.9\", \"10.134.0.10\", \"10.134.0.11\", \"10.134.0.12\", \"10.134.0.13\", \"10.134.0.14\", \"10.134.0.15\", \"10.134.0.16\", \"10.134.0.17\", \"10.134.0.18\", \"10.134.0.19\", \"10.134.0.20\", \"10.134.0.21\", \"10.134.0.22\", \"10.134.0.23\", \"10.134.0.24\", \"10.134.0.25\", \"10.134.0.26\", \"10.134.0.27\", \"10.134.0.28\", \"10.134.0.30\", \"10.134.0.31\", \"10.134.0.32\", \"10.134.0.33\", \"10.134.0.34\", \"10.134.0.35\", \"10.134.0.36\", \"10.134.0.37\", \"10.134.0.38\", \"10.134.0.4\", \"10.134.0.42\", \"10.134.0.9\", \"10.135.0.10\", \"10.135.0.11\", \"10.135.0.12\", \"10.135.0.13\", \"10.135.0.14\", \"10.135.0.15\", \"10.135.0.16\", \"10.135.0.17\", \"10.135.0.18\", \"10.135.0.19\", \"10.135.0.23\", \"10.135.0.24\", \"10.135.0.26\", \"10.135.0.27\", \"10.135.0.29\", \"10.135.0.3\", \"10.135.0.4\", \"10.135.0.40\", \"10.135.0.41\", \"10.135.0.42\", \"10.135.0.43\", \"10.135.0.44\", \"10.135.0.5\", \"10.135.0.6\", \"10.135.0.7\", \"10.135.0.8\", \"10.135.0.9\"] external_ids : {direction=Ingress, gress-index=\"4\", ip-family=v4, \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:4:v4\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy} name : a13814616246365836720 _uuid : 84d76f13-ad95-4c00-8329-a0b1d023c289 addresses : [\"10.132.3.76\", \"10.135.0.44\"] external_ids : {direction=Egress, gress-index=\"4\", ip-family=v4, \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Egress:4:v4\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy} name : a5972452606168369118 
_uuid : 0c53e917-f7ee-4256-8f3a-9522c0481e52 addresses : [\"10.132.0.10\", \"10.132.0.11\", \"10.132.0.12\", \"10.132.0.13\", \"10.132.0.14\", \"10.132.0.15\", \"10.132.0.16\", \"10.132.0.17\", \"10.132.0.5\", \"10.132.0.7\", \"10.132.0.71\", \"10.132.0.75\", \"10.132.0.8\", \"10.132.0.81\", \"10.132.0.9\", \"10.132.2.10\", \"10.132.2.11\", \"10.132.2.12\", \"10.132.2.14\", \"10.132.2.15\", \"10.132.2.3\", \"10.132.2.4\", \"10.132.2.5\", \"10.132.2.6\", \"10.132.2.7\", \"10.132.2.8\", \"10.132.2.9\", \"10.132.3.64\", \"10.132.3.65\", \"10.132.3.72\", \"10.132.3.73\", \"10.132.3.76\", \"10.133.0.10\", \"10.133.0.11\", \"10.133.0.12\", \"10.133.0.13\", \"10.133.0.14\", \"10.133.0.15\", \"10.133.0.16\", \"10.133.0.17\", \"10.133.0.18\", \"10.133.0.19\", \"10.133.0.20\", \"10.133.0.21\", \"10.133.0.22\", \"10.133.0.23\", \"10.133.0.24\", \"10.133.0.25\", \"10.133.0.26\", \"10.133.0.27\", \"10.133.0.28\", \"10.133.0.29\", \"10.133.0.30\", \"10.133.0.31\", \"10.133.0.32\", \"10.133.0.33\", \"10.133.0.34\", \"10.133.0.35\", \"10.133.0.36\", \"10.133.0.37\", \"10.133.0.38\", \"10.133.0.39\", \"10.133.0.40\", \"10.133.0.41\", \"10.133.0.42\", \"10.133.0.44\", \"10.133.0.45\", \"10.133.0.46\", \"10.133.0.47\", \"10.133.0.48\", \"10.133.0.5\", \"10.133.0.6\", \"10.133.0.7\", \"10.133.0.8\", \"10.133.0.9\", \"10.134.0.10\", \"10.134.0.11\", \"10.134.0.12\", \"10.134.0.13\", \"10.134.0.14\", \"10.134.0.15\", \"10.134.0.16\", \"10.134.0.17\", \"10.134.0.18\", \"10.134.0.19\", \"10.134.0.20\", \"10.134.0.21\", \"10.134.0.22\", \"10.134.0.23\", \"10.134.0.24\", \"10.134.0.25\", \"10.134.0.26\", \"10.134.0.27\", \"10.134.0.28\", \"10.134.0.30\", \"10.134.0.31\", \"10.134.0.32\", \"10.134.0.33\", \"10.134.0.34\", \"10.134.0.35\", \"10.134.0.36\", \"10.134.0.37\", \"10.134.0.38\", \"10.134.0.4\", \"10.134.0.42\", \"10.134.0.9\", \"10.135.0.10\", \"10.135.0.11\", \"10.135.0.12\", \"10.135.0.13\", \"10.135.0.14\", \"10.135.0.15\", \"10.135.0.16\", \"10.135.0.17\", \"10.135.0.18\", \"10.135.0.19\", \"10.135.0.23\", \"10.135.0.24\", \"10.135.0.26\", \"10.135.0.27\", \"10.135.0.29\", \"10.135.0.3\", \"10.135.0.4\", \"10.135.0.40\", \"10.135.0.41\", \"10.135.0.42\", \"10.135.0.43\", \"10.135.0.44\", \"10.135.0.5\", \"10.135.0.6\", \"10.135.0.7\", \"10.135.0.8\", \"10.135.0.9\"] external_ids : {direction=Egress, gress-index=\"2\", ip-family=v4, \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Egress:2:v4\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy} name : a18396736153283155648 _uuid : 5228bf1b-dfd8-40ec-bfa8-95c5bf9aded9 addresses : [] external_ids : {direction=Ingress, gress-index=\"0\", ip-family=v4, \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:0:v4\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy} name : a14545668191619617708 _uuid : 46530d69-70da-4558-8c63-884ec9dc4f25 addresses : [\"10.132.2.10\", \"10.132.2.5\", \"10.132.2.6\", \"10.132.2.7\", \"10.132.2.8\", \"10.132.2.9\", \"10.133.0.47\", \"10.134.0.33\", \"10.135.0.10\", \"10.135.0.11\", \"10.135.0.12\", \"10.135.0.19\", \"10.135.0.24\", \"10.135.0.7\", \"10.135.0.8\", \"10.135.0.9\"] external_ids : {direction=Ingress, gress-index=\"1\", ip-family=v4, \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:1:v4\", 
\"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy} name : a6786643370959569281 _uuid : 65fdcdea-0b9f-4318-9884-1b51d231ad1d addresses : [\"10.132.3.72\", \"10.135.0.42\"] external_ids : {direction=Ingress, gress-index=\"2\", ip-family=v4, \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:2:v4\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy} name : a13730899355151937870 _uuid : 73eabdb0-36bf-4ca3-b66d-156ac710df4c addresses : [\"10.0.32.0/19\", \"10.0.56.38/32\", \"10.0.69.0/24\", \"10.132.3.72\", \"10.135.0.42\", \"172.29.0.0/30\", \"192.168.194.103\", \"192.168.194.2\"] external_ids : {direction=Egress, gress-index=\"3\", ip-family=v4, \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Egress:3:v4\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy} name : a10622494091691694581 _uuid : 50cdbef2-71b5-474b-914c-6fcd1d7712d3 addresses : [\"10.132.0.10\", \"10.132.0.11\", \"10.132.0.12\", \"10.132.0.13\", \"10.132.0.14\", \"10.132.0.15\", \"10.132.0.16\", \"10.132.0.17\", \"10.132.0.5\", \"10.132.0.7\", \"10.132.0.71\", \"10.132.0.75\", \"10.132.0.8\", \"10.132.0.81\", \"10.132.0.9\", \"10.132.2.10\", \"10.132.2.11\", \"10.132.2.12\", \"10.132.2.14\", \"10.132.2.15\", \"10.132.2.3\", \"10.132.2.4\", \"10.132.2.5\", \"10.132.2.6\", \"10.132.2.7\", \"10.132.2.8\", \"10.132.2.9\", \"10.132.3.64\", \"10.132.3.65\", \"10.132.3.72\", \"10.132.3.73\", \"10.132.3.76\", \"10.133.0.10\", \"10.133.0.11\", \"10.133.0.12\", \"10.133.0.13\", \"10.133.0.14\", \"10.133.0.15\", \"10.133.0.16\", \"10.133.0.17\", \"10.133.0.18\", \"10.133.0.19\", \"10.133.0.20\", \"10.133.0.21\", \"10.133.0.22\", \"10.133.0.23\", \"10.133.0.24\", \"10.133.0.25\", \"10.133.0.26\", \"10.133.0.27\", \"10.133.0.28\", \"10.133.0.29\", \"10.133.0.30\", \"10.133.0.31\", \"10.133.0.32\", \"10.133.0.33\", \"10.133.0.34\", \"10.133.0.35\", \"10.133.0.36\", \"10.133.0.37\", \"10.133.0.38\", \"10.133.0.39\", \"10.133.0.40\", \"10.133.0.41\", \"10.133.0.42\", \"10.133.0.44\", \"10.133.0.45\", \"10.133.0.46\", \"10.133.0.47\", \"10.133.0.48\", \"10.133.0.5\", \"10.133.0.6\", \"10.133.0.7\", \"10.133.0.8\", \"10.133.0.9\", \"10.134.0.10\", \"10.134.0.11\", \"10.134.0.12\", \"10.134.0.13\", \"10.134.0.14\", \"10.134.0.15\", \"10.134.0.16\", \"10.134.0.17\", \"10.134.0.18\", \"10.134.0.19\", \"10.134.0.20\", \"10.134.0.21\", \"10.134.0.22\", \"10.134.0.23\", \"10.134.0.24\", \"10.134.0.25\", \"10.134.0.26\", \"10.134.0.27\", \"10.134.0.28\", \"10.134.0.30\", \"10.134.0.31\", \"10.134.0.32\", \"10.134.0.33\", \"10.134.0.34\", \"10.134.0.35\", \"10.134.0.36\", \"10.134.0.37\", \"10.134.0.38\", \"10.134.0.4\", \"10.134.0.42\", \"10.134.0.9\", \"10.135.0.10\", \"10.135.0.11\", \"10.135.0.12\", \"10.135.0.13\", \"10.135.0.14\", \"10.135.0.15\", \"10.135.0.16\", \"10.135.0.17\", \"10.135.0.18\", \"10.135.0.19\", \"10.135.0.23\", \"10.135.0.24\", \"10.135.0.26\", \"10.135.0.27\", \"10.135.0.29\", \"10.135.0.3\", \"10.135.0.4\", \"10.135.0.40\", \"10.135.0.41\", \"10.135.0.42\", \"10.135.0.43\", \"10.135.0.44\", \"10.135.0.5\", \"10.135.0.6\", \"10.135.0.7\", \"10.135.0.8\", \"10.135.0.9\"] external_ids : {direction=Egress, gress-index=\"0\", ip-family=v4, 
\"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Egress:0:v4\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy} name : a13517855690389298082 _uuid : 32a42f32-2d11-43dd-979d-a56d7ee6aa57 addresses : [\"10.132.3.76\", \"10.135.0.44\"] external_ids : {direction=Ingress, gress-index=\"3\", ip-family=v4, \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Ingress:3:v4\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy} name : a764182844364804195 _uuid : 8fd3b977-6e1c-47aa-82b7-e3e3136c4a72 addresses : [\"0.0.0.0/0\"] external_ids : {direction=Egress, gress-index=\"5\", ip-family=v4, \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Egress:5:v4\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy} name : a11452480169090787059",
"ovn-nbctl find Address_Set 'external_ids{>=}{\"k8s.ovn.org/owner-type\"=AdminNetworkPolicy,direction=Egress,\"k8s.ovn.org/name\"=cluster-control,gress-index=\"5\"}'",
"_uuid : 8fd3b977-6e1c-47aa-82b7-e3e3136c4a72 addresses : [\"0.0.0.0/0\"] external_ids : {direction=Egress, gress-index=\"5\", ip-family=v4, \"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control:Egress:5:v4\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy} name : a11452480169090787059",
"ovn-nbctl find Port_Group 'external_ids{>=}{\"k8s.ovn.org/owner-type\"=AdminNetworkPolicy,\"k8s.ovn.org/name\"=cluster-control}'",
"_uuid : f50acf71-7488-4b9a-b7b8-c8a024e99d21 acls : [04f20275-c410-405c-a923-0e677f767889, 0d5e4722-b608-4bb1-b625-23c323cc9926, 1a27d30e-3f96-4915-8ddd-ade7f22c117b, 1a68a5ed-e7f9-47d0-b55c-89184d97e81a, 4b5d836a-e0a3-4088-825e-f9f0ca58e538, 5a6e5bb4-36eb-4209-b8bc-c611983d4624, 5d09957d-d2cc-4f5a-9ddd-b97d9d772023, aa1a224d-7960-4952-bdfb-35246bafbac8, b23a087f-08f8-4225-8c27-4a9a9ee0c407, b7be6472-df67-439c-8c9c-f55929f0a6e0, d14ed5cf-2e06-496e-8cae-6b76d5dd5ccd] external_ids : {\"k8s.ovn.org/id\"=\"default-network-controller:AdminNetworkPolicy:cluster-control\", \"k8s.ovn.org/name\"=cluster-control, \"k8s.ovn.org/owner-controller\"=default-network-controller, \"k8s.ovn.org/owner-type\"=AdminNetworkPolicy} name : a14645450421485494999 ports : [5e75f289-8273-4f8a-8798-8c10f7318833, de7e1b71-6184-445d-93e7-b20acadf41ea]",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} ingress: []",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {}",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-http-and-https spec: podSelector: matchLabels: role: frontend ingress: - ports: - protocol: TCP port: 80 - protocol: TCP port: 443",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-pod-and-namespace-both spec: podSelector: matchLabels: name: test-pods ingress: - from: - namespaceSelector: matchLabels: project: project_name podSelector: matchLabels: name: test-pods",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-router spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: \"\" 1 podSelector: {} policyTypes: - Ingress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-hostnetwork spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/host-network: \"\" podSelector: {} policyTypes: - Ingress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy spec: podSelector: {} ingress: - from: - podSelector: matchLabels: role: frontend - from: - podSelector: matchLabels: role: backend",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy spec: podSelector: {} ingress: - from: - podSelector: matchExpressions: - {key: role, operator: In, values: [frontend, backend]}",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy1 spec: podSelector: matchLabels: role: db ingress: - from: - podSelector: matchLabels: role: frontend --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy2 spec: podSelector: matchLabels: role: client ingress: - from: - podSelector: matchLabels: role: frontend",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy3 spec: podSelector: matchExpressions: - {key: role, operator: In, values: [db, client]} ingress: - from: - podSelector: matchLabels: role: frontend",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017",
"touch <policy_name>.yaml",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} policyTypes: - Ingress ingress: []",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {}",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-traffic-pod spec: podSelector: matchLabels: pod: pod-a policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: namespace-y",
"oc apply -f <policy_name>.yaml -n <namespace>",
"networkpolicy.networking.k8s.io/deny-by-default created",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default namespace: default 1 spec: podSelector: {} 2 ingress: [] 3",
"oc apply -f deny-by-default.yaml",
"networkpolicy.networking.k8s.io/deny-by-default created",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-external namespace: default spec: policyTypes: - Ingress podSelector: matchLabels: app: web ingress: - {}",
"oc apply -f web-allow-external.yaml",
"networkpolicy.networking.k8s.io/web-allow-external created",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-all-namespaces namespace: default spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: {} 2",
"oc apply -f web-allow-all-namespaces.yaml",
"networkpolicy.networking.k8s.io/web-allow-all-namespaces created",
"oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80",
"oc run test-USDRANDOM --namespace=secondary --rm -i -t --image=alpine -- sh",
"wget -qO- --timeout=2 http://web.default",
"<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-prod namespace: default spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production 2",
"oc apply -f web-allow-prod.yaml",
"networkpolicy.networking.k8s.io/web-allow-prod created",
"oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80",
"oc create namespace prod",
"oc label namespace/prod purpose=production",
"oc create namespace dev",
"oc label namespace/dev purpose=testing",
"oc run test-USDRANDOM --namespace=dev --rm -i -t --image=alpine -- sh",
"wget -qO- --timeout=2 http://web.default",
"wget: download timed out",
"oc run test-USDRANDOM --namespace=prod --rm -i -t --image=alpine -- sh",
"wget -qO- --timeout=2 http://web.default",
"<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017",
"oc get networkpolicy",
"oc describe networkpolicy <policy_name> -n <namespace>",
"oc describe networkpolicy allow-same-namespace",
"Name: allow-same-namespace Namespace: ns1 Created on: 2021-05-24 22:28:56 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: PodSelector: <none> Not affecting egress traffic Policy Types: Ingress",
"oc get networkpolicy",
"oc apply -n <namespace> -f <policy_file>.yaml",
"oc edit networkpolicy <policy_name> -n <namespace>",
"oc describe networkpolicy <policy_name> -n <namespace>",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017",
"oc delete networkpolicy <policy_name> -n <namespace>",
"networkpolicy.networking.k8s.io/default-deny deleted",
"oc adm create-bootstrap-project-template -o yaml > template.yaml",
"oc create -f template.yaml -n openshift-config",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>",
"oc edit template <project_template> -n openshift-config",
"objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress",
"oc new-project <project> 1",
"oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s",
"cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: \"\" podSelector: {} policyTypes: - Ingress EOF",
"cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress EOF",
"cat << EOF| oc create -f - kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} EOF",
"cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress EOF",
"oc describe networkpolicy",
"Name: allow-from-openshift-ingress Namespace: example1 Created on: 2020-06-09 00:28:17 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: ingress Not affecting egress traffic Policy Types: Ingress Name: allow-from-openshift-monitoring Namespace: example1 Created on: 2020-06-09 00:29:57 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: monitoring Not affecting egress traffic Policy Types: Ingress",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: \"null\" maxFileSize: 50 rateLimit: 20 syslogFacility: local0",
"kind: Namespace apiVersion: v1 metadata: name: example1 annotations: k8s.ovn.org/acl-logging: |- { \"deny\": \"info\", \"allow\": \"info\" }",
"<timestamp>|<message_serial>|acl_log(ovn_pinctrl0)|<severity>|name=\"<acl_name>\", verdict=\"<verdict>\", severity=\"<severity>\", direction=\"<direction>\": <flow>",
"<proto>,vlan_tci=0x0000,dl_src=<src_mac>,dl_dst=<source_mac>,nw_src=<source_ip>,nw_dst=<target_ip>,nw_tos=<tos_dscp>,nw_ecn=<tos_ecn>,nw_ttl=<ip_ttl>,nw_frag=<fragment>,tp_src=<tcp_src_port>,tp_dst=<tcp_dst_port>,tcp_flags=<tcp_flags>",
"2023-11-02T16:28:54.139Z|00004|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:Ingress\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:55.187Z|00005|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:Ingress\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:57.235Z|00006|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:Ingress\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn",
"apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: annotations: k8s.ovn.org/acl-logging: '{ \"deny\": \"alert\", \"allow\": \"alert\", \"pass\" : \"warning\" }' name: anp-tenant-log spec: priority: 5 subject: namespaces: matchLabels: tenant: backend-storage # Selects all pods owned by storage tenant. ingress: - name: \"allow-all-ingress-product-development-and-customer\" # Product development and customer tenant ingress to backend storage. action: \"Allow\" from: - pods: namespaceSelector: matchExpressions: - key: tenant operator: In values: - product-development - customer podSelector: {} - name: \"pass-all-ingress-product-security\" action: \"Pass\" from: - namespaces: matchLabels: tenant: product-security - name: \"deny-all-ingress\" # Ingress to backend from all other pods in the cluster. action: \"Deny\" from: - namespaces: {} egress: - name: \"allow-all-egress-product-development\" action: \"Allow\" to: - pods: namespaceSelector: matchLabels: tenant: product-development podSelector: {} - name: \"pass-egress-product-security\" action: \"Pass\" to: - namespaces: matchLabels: tenant: product-security - name: \"deny-all-egress\" # Egress from backend denied to all other pods. action: \"Deny\" to: - namespaces: {}",
"2024-06-10T16:27:45.194Z|00052|acl_log(ovn_pinctrl0)|INFO|name=\"ANP:anp-tenant-log:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:1a,dl_dst=0a:58:0a:80:02:19,nw_src=10.128.2.26,nw_dst=10.128.2.25,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=57814,tp_dst=8080,tcp_flags=syn 2024-06-10T16:28:23.130Z|00059|acl_log(ovn_pinctrl0)|INFO|name=\"ANP:anp-tenant-log:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:18,dl_dst=0a:58:0a:80:02:19,nw_src=10.128.2.24,nw_dst=10.128.2.25,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=38620,tp_dst=8080,tcp_flags=ack 2024-06-10T16:28:38.293Z|00069|acl_log(ovn_pinctrl0)|INFO|name=\"ANP:anp-tenant-log:Egress:0\", verdict=allow, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:19,dl_dst=0a:58:0a:80:02:1a,nw_src=10.128.2.25,nw_dst=10.128.2.26,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=47566,tp_dst=8080,tcp_flags=fin|ack=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=55704,tp_dst=8080,tcp_flags=ack",
"2024-06-10T16:33:12.019Z|00075|acl_log(ovn_pinctrl0)|INFO|name=\"ANP:anp-tenant-log:Ingress:1\", verdict=pass, severity=warning, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:1b,dl_dst=0a:58:0a:80:02:19,nw_src=10.128.2.27,nw_dst=10.128.2.25,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=37394,tp_dst=8080,tcp_flags=ack 2024-06-10T16:35:04.209Z|00081|acl_log(ovn_pinctrl0)|INFO|name=\"ANP:anp-tenant-log:Egress:1\", verdict=pass, severity=warning, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:19,dl_dst=0a:58:0a:80:02:1b,nw_src=10.128.2.25,nw_dst=10.128.2.27,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=34018,tp_dst=8080,tcp_flags=ack",
"2024-06-10T16:43:05.287Z|00087|acl_log(ovn_pinctrl0)|INFO|name=\"ANP:anp-tenant-log:Egress:2\", verdict=drop, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:19,dl_dst=0a:58:0a:80:02:18,nw_src=10.128.2.25,nw_dst=10.128.2.24,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=51598,tp_dst=8080,tcp_flags=syn 2024-06-10T16:44:43.591Z|00090|acl_log(ovn_pinctrl0)|INFO|name=\"ANP:anp-tenant-log:Ingress:2\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:1c,dl_dst=0a:58:0a:80:02:19,nw_src=10.128.2.28,nw_dst=10.128.2.25,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=33774,tp_dst=8080,tcp_flags=syn",
"apiVersion: policy.networking.k8s.io/v1alpha1 kind: BaselineAdminNetworkPolicy metadata: annotations: k8s.ovn.org/acl-logging: '{ \"deny\": \"alert\", \"allow\": \"alert\"}' name: default spec: subject: namespaces: matchLabels: tenant: workloads # Selects all workload pods in the cluster. ingress: - name: \"default-allow-dns\" # This rule allows ingress from dns tenant to all workloads. action: \"Allow\" from: - namespaces: matchLabels: tenant: dns - name: \"default-deny-dns\" # This rule denies all ingress from all pods to workloads. action: \"Deny\" from: - namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well. egress: - name: \"default-deny-dns\" # This rule denies all egress from workloads. It will be applied when no ANP or network policy matches. action: \"Deny\" to: - namespaces: {} # Use the empty selector with caution because it also selects OpenShift namespaces as well.",
"2024-06-10T18:11:58.263Z|00022|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=syn 2024-06-10T18:11:58.264Z|00023|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=psh|ack 2024-06-10T18:11:58.264Z|00024|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=ack 2024-06-10T18:11:58.264Z|00025|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=ack 2024-06-10T18:11:58.264Z|00026|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=fin|ack 2024-06-10T18:11:58.264Z|00027|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:57,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.87,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=60510,tp_dst=8080,tcp_flags=ack",
"2024-06-10T18:09:57.774Z|00016|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Egress:0\", verdict=drop, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:56,dl_dst=0a:58:0a:82:02:57,nw_src=10.130.2.86,nw_dst=10.130.2.87,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=45614,tp_dst=8080,tcp_flags=syn 2024-06-10T18:09:58.809Z|00017|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Egress:0\", verdict=drop, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:56,dl_dst=0a:58:0a:82:02:57,nw_src=10.130.2.86,nw_dst=10.130.2.87,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=45614,tp_dst=8080,tcp_flags=syn 2024-06-10T18:10:00.857Z|00018|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Egress:0\", verdict=drop, severity=alert, direction=from-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:56,dl_dst=0a:58:0a:82:02:57,nw_src=10.130.2.86,nw_dst=10.130.2.87,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=45614,tp_dst=8080,tcp_flags=syn 2024-06-10T18:10:25.414Z|00019|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Ingress:1\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:58,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.88,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=40630,tp_dst=8080,tcp_flags=syn 2024-06-10T18:10:26.457Z|00020|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Ingress:1\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:58,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.88,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=40630,tp_dst=8080,tcp_flags=syn 2024-06-10T18:10:28.505Z|00021|acl_log(ovn_pinctrl0)|INFO|name=\"BANP:default:Ingress:1\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:58,dl_dst=0a:58:0a:82:02:56,nw_src=10.130.2.88,nw_dst=10.130.2.86,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,tp_src=40630,tp_dst=8080,tcp_flags=syn",
"oc edit network.operator.openshift.io/cluster",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: \"null\" maxFileSize: 50 rateLimit: 20 syslogFacility: local0",
"cat <<EOF| oc create -f - kind: Namespace apiVersion: v1 metadata: name: verify-audit-logging annotations: k8s.ovn.org/acl-logging: '{ \"deny\": \"alert\", \"allow\": \"alert\" }' EOF",
"namespace/verify-audit-logging created",
"cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: deny-all spec: podSelector: matchLabels: policyTypes: - Ingress - Egress --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace namespace: verify-audit-logging spec: podSelector: {} policyTypes: - Ingress - Egress ingress: - from: - podSelector: {} egress: - to: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: verify-audit-logging EOF",
"networkpolicy.networking.k8s.io/deny-all created networkpolicy.networking.k8s.io/allow-from-same-namespace created",
"cat <<EOF| oc create -n default -f - apiVersion: v1 kind: Pod metadata: name: client spec: containers: - name: client image: registry.access.redhat.com/rhel7/rhel-tools command: [\"/bin/sh\", \"-c\"] args: [\"sleep inf\"] EOF",
"for name in client server; do cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: v1 kind: Pod metadata: name: USD{name} spec: containers: - name: USD{name} image: registry.access.redhat.com/rhel7/rhel-tools command: [\"/bin/sh\", \"-c\"] args: [\"sleep inf\"] EOF done",
"pod/client created pod/server created",
"POD_IP=USD(oc get pods server -n verify-audit-logging -o jsonpath='{.status.podIP}')",
"oc exec -it client -n default -- /bin/ping -c 2 USDPOD_IP",
"PING 10.128.2.55 (10.128.2.55) 56(84) bytes of data. --- 10.128.2.55 ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 2041ms",
"oc exec -it client -n verify-audit-logging -- /bin/ping -c 2 USDPOD_IP",
"PING 10.128.0.86 (10.128.0.86) 56(84) bytes of data. 64 bytes from 10.128.0.86: icmp_seq=1 ttl=64 time=2.21 ms 64 bytes from 10.128.0.86: icmp_seq=2 ttl=64 time=0.440 ms --- 10.128.0.86 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1001ms rtt min/avg/max/mdev = 0.440/1.329/2.219/0.890 ms",
"for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done",
"2023-11-02T16:28:54.139Z|00004|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:Ingress\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:55.187Z|00005|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:Ingress\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:28:57.235Z|00006|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:Ingress\", verdict=drop, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:01,dl_dst=0a:58:0a:81:02:23,nw_src=10.131.0.39,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=58496,tp_dst=8080,tcp_flags=syn 2023-11-02T16:49:57.909Z|00028|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Egress:0\", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:57.909Z|00029|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00030|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Egress:0\", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00031|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0",
"oc annotate namespace <namespace> k8s.ovn.org/acl-logging='{ \"deny\": \"alert\", \"allow\": \"notice\" }'",
"kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: |- { \"deny\": \"alert\", \"allow\": \"notice\" }",
"namespace/verify-audit-logging annotated",
"for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done",
"2023-11-02T16:49:57.909Z|00028|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Egress:0\", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:57.909Z|00029|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00030|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Egress:0\", verdict=allow, severity=alert, direction=from-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0 2023-11-02T16:49:58.932Z|00031|acl_log(ovn_pinctrl0)|INFO|name=\"NP:verify-audit-logging:allow-from-same-namespace:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:81:02:22,dl_dst=0a:58:0a:81:02:23,nw_src=10.129.2.34,nw_dst=10.129.2.35,nw_tos=0,nw_ecn=0,nw_ttl=64,nw_frag=no,icmp_type=8,icmp_code=0",
"oc annotate --overwrite namespace <namespace> k8s.ovn.org/acl-logging-",
"kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: null",
"namespace/verify-audit-logging annotated",
"oc get egressfirewall --all-namespaces",
"oc describe egressfirewall <policy_name>",
"Name: default Namespace: project1 Created: 20 minutes ago Labels: <none> Annotations: <none> Rule: Allow to 1.2.3.0/24 Rule: Allow to www.example.com Rule: Deny to 0.0.0.0/0",
"oc get -n <project> egressfirewall",
"oc get -n <project> egressfirewall <name> -o yaml > <filename>.yaml",
"oc replace -f <filename>.yaml",
"oc get -n <project> egressfirewall",
"oc delete -n <project> egressfirewall <name>",
"apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default namespace: <namespace> 1 spec: egress: - to: cidrSelector: <api_server_address_range> 2 type: Allow - to: cidrSelector: 0.0.0.0/0 3 type: Deny",
"apiVersion: networking.openshift.io/v1alpha1 kind: DNSNameResolver spec: name: www.example.com. 1 status: resolvedNames: - dnsName: www.example.com. 2 resolvedAddress: - ip: \"1.2.3.4\" 3 ttlSeconds: 60 4 lastLookupTime: \"2023-08-08T15:07:04Z\" 5",
"apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: <name> 1 spec: egress: 2",
"egress: - type: <type> 1 to: 2 cidrSelector: <cidr> 3 dnsName: <dns_name> 4 nodeSelector: <label_name>: <label_value> 5 ports: 6",
"ports: - port: <port> 1 protocol: <protocol> 2",
"apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: 1 - type: Allow to: cidrSelector: 1.2.3.0/24 - type: Deny to: cidrSelector: 0.0.0.0/0",
"apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: - type: Deny to: cidrSelector: 172.16.1.1/32 ports: - port: 80 protocol: TCP - port: 443",
"apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: - to: nodeSelector: matchLabels: region: east type: Allow",
"oc create -f <policy_name>.yaml -n <project>",
"oc create -f default.yaml -n project1",
"egressfirewall.k8s.ovn.org/v1 created",
"oc patch networks.operator.openshift.io cluster --type=merge -p '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"ipsecConfig\":{ \"mode\":<mode> }}}}}'",
"oc get pods -n openshift-ovn-kubernetes -l=app=ovnkube-node",
"ovnkube-node-5xqbf 8/8 Running 0 28m ovnkube-node-6mwcx 8/8 Running 0 29m ovnkube-node-ck5fr 8/8 Running 0 31m ovnkube-node-fr4ld 8/8 Running 0 26m ovnkube-node-wgs4l 8/8 Running 0 33m ovnkube-node-zfvcl 8/8 Running 0 34m",
"oc -n openshift-ovn-kubernetes rsh ovnkube-node-<XXXXX> ovn-nbctl --no-leader-only get nb_global . ipsec",
"true",
"oc get nodes",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ipsec-config spec: nodeSelector: kubernetes.io/hostname: \"<hostname>\" 1 desiredState: interfaces: - name: <interface_name> 2 type: ipsec libreswan: left: <cluster_node> 3 leftid: '%fromcert' leftrsasigkey: '%cert' leftcert: left_server leftmodecfgclient: false right: <external_host> 4 rightid: '%fromcert' rightrsasigkey: '%cert' rightsubnet: <external_address>/32 5 ikev2: insist type: transport",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ipsec-config spec: nodeSelector: kubernetes.io/hostname: \"<hostname>\" 1 desiredState: interfaces: - name: <interface_name> 2 type: ipsec libreswan: left: <cluster_node> 3 leftid: '%fromcert' leftmodecfgclient: false leftrsasigkey: '%cert' leftcert: left_server right: <external_host> 4 rightid: '%fromcert' rightrsasigkey: '%cert' rightsubnet: <external_address>/32 5 ikev2: insist type: tunnel",
"oc create -f ipsec-config.yaml",
"for role in master worker; do cat >> \"99-ipsec-USD{role}-endpoint-config.bu\" <<-EOF variant: openshift version: 4.18.0 metadata: name: 99-USD{role}-import-certs labels: machineconfiguration.openshift.io/role: USDrole systemd: units: - name: ipsec-import.service enabled: true contents: | [Unit] Description=Import external certs into ipsec NSS Before=ipsec.service [Service] Type=oneshot ExecStart=/usr/local/bin/ipsec-addcert.sh RemainAfterExit=false StandardOutput=journal [Install] WantedBy=multi-user.target storage: files: - path: /etc/pki/certs/ca.pem mode: 0400 overwrite: true contents: local: ca.pem - path: /etc/pki/certs/left_server.p12 mode: 0400 overwrite: true contents: local: left_server.p12 - path: /usr/local/bin/ipsec-addcert.sh mode: 0740 overwrite: true contents: inline: | #!/bin/bash -e echo \"importing cert to NSS\" certutil -A -n \"CA\" -t \"CT,C,C\" -d /var/lib/ipsec/nss/ -i /etc/pki/certs/ca.pem pk12util -W \"\" -i /etc/pki/certs/left_server.p12 -d /var/lib/ipsec/nss/ certutil -M -n \"left_server\" -t \"u,u,u\" -d /var/lib/ipsec/nss/ EOF done",
"for role in master worker; do butane -d . 99-ipsec-USD{role}-endpoint-config.bu -o ./99-ipsec-USDrole-endpoint-config.yaml done",
"for role in master worker; do oc apply -f 99-ipsec-USD{role}-endpoint-config.yaml done",
"oc get mcp",
"oc get mc | grep ipsec",
"80-ipsec-master-extensions 3.2.0 6d15h 80-ipsec-worker-extensions 3.2.0 6d15h",
"oc get mcp master -o yaml | grep 80-ipsec-master-extensions -c",
"2",
"oc get mcp worker -o yaml | grep 80-ipsec-worker-extensions -c",
"2",
"kind: NodeNetworkConfigurationPolicy apiVersion: nmstate.io/v1 metadata: name: <name> spec: nodeSelector: kubernetes.io/hostname: <node_name> desiredState: interfaces: - name: <tunnel_name> type: ipsec state: absent",
"oc apply -f remove-ipsec-tunnel.yaml",
"oc patch networks.operator.openshift.io cluster --type=merge -p '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"ipsecConfig\":{ \"mode\":\"Disabled\" }}}}}'",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: <name> 1 spec: domain: <domain> 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: External 3 dnsManagementPolicy: Unmanaged 4",
"apply -f <name>.yaml 1",
"SCOPE=USD(oc -n openshift-ingress-operator get ingresscontroller <name> -o=jsonpath=\"{.status.endpointPublishingStrategy.loadBalancer.scope}\") -n openshift-ingress-operator patch ingresscontrollers/<name> --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\":{\"type\":\"LoadBalancerService\",\"loadBalancer\":{\"dnsManagementPolicy\":\"Unmanaged\", \"scope\":\"USD{SCOPE}\"}}}}'",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: # networkDiagnostics: 1 mode: \"All\" 2 sourcePlacement: 3 nodeSelector: checkNodes: groupA tolerations: - key: myTaint effect: NoSchedule operator: Exists targetPlacement: 4 nodeSelector: checkNodes: groupB tolerations: - key: myOtherTaint effect: NoExecute operator: Exists",
"oc edit network.config.openshift.io cluster",
"oc get pods -n openshift-network-diagnostics -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES network-check-source-84c69dbd6b-p8f7n 1/1 Running 0 9h 10.131.0.8 ip-10-0-40-197.us-east-2.compute.internal <none> <none> network-check-target-46pct 1/1 Running 0 9h 10.131.0.6 ip-10-0-40-197.us-east-2.compute.internal <none> <none> network-check-target-8kwgf 1/1 Running 0 9h 10.128.2.4 ip-10-0-95-74.us-east-2.compute.internal <none> <none> network-check-target-jc6n7 1/1 Running 0 9h 10.129.2.4 ip-10-0-21-151.us-east-2.compute.internal <none> <none> network-check-target-lvwnn 1/1 Running 0 9h 10.128.0.7 ip-10-0-17-129.us-east-2.compute.internal <none> <none> network-check-target-nslvj 1/1 Running 0 9h 10.130.0.7 ip-10-0-89-148.us-east-2.compute.internal <none> <none> network-check-target-z2sfx 1/1 Running 0 9h 10.129.0.4 ip-10-0-60-253.us-east-2.compute.internal <none> <none>",
"oc get network.config.openshift.io cluster -o yaml",
"status: # conditions: - lastTransitionTime: \"2024-05-27T08:28:39Z\" message: \"\" reason: AsExpected status: \"True\" type: NetworkDiagnosticsAvailable",
"oc get podnetworkconnectivitycheck -n openshift-network-diagnostics",
"NAME AGE network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 73m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-default-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-external 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-internal 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-c-n8mbf 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-d-4hnrz 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-service-cluster 75m",
"oc get podnetworkconnectivitycheck <name> -n openshift-network-diagnostics -o yaml",
"apiVersion: controlplane.operator.openshift.io/v1alpha1 kind: PodNetworkConnectivityCheck metadata: name: network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 namespace: openshift-network-diagnostics spec: sourcePod: network-check-source-7c88f6d9f-hmg2f targetEndpoint: 10.0.0.4:6443 tlsClientCert: name: \"\" status: conditions: - lastTransitionTime: \"2021-01-13T20:11:34Z\" message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnectSuccess status: \"True\" type: Reachable failures: - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:10:34Z\" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:09:34Z\" - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:08:34Z\" outages: - end: \"2021-01-13T20:11:34Z\" endLogs: - latency: 2.032018ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T20:11:34Z\" - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:10:34Z\" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:09:34Z\" - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:08:34Z\" message: Connectivity restored after 2m59.999789186s start: \"2021-01-13T20:08:34Z\" startLogs: - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:08:34Z\" successes: - latency: 2.845865ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:14:34Z\" - latency: 2.926345ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:13:34Z\" - latency: 2.895796ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:12:34Z\" - 
latency: 2.696844ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:11:34Z\" - latency: 1.502064ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:10:34Z\" - latency: 1.388857ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:09:34Z\" - latency: 1.906383ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:08:34Z\" - latency: 2.089073ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:07:34Z\" - latency: 2.156994ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:06:34Z\" - latency: 1.777043ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:05:34Z\"",
"oc describe network.config cluster",
"Status: Cluster Network: Cidr: 10.217.0.0/22 Host Prefix: 23 Cluster Network MTU: 1400 Network Type: OVNKubernetes Service Network: 10.217.4.0/23",
"dhcp-option-force=26,<mtu>",
"oc debug node/<node_name> -- chroot /host nmcli -g connection.interface-name c show ovs-if-phys0",
"[connection-<interface>-mtu] match-device=interface-name:<interface> ethernet.mtu=<mtu>",
"variant: openshift version: 4.18.0 metadata: name: 01-control-plane-interface labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf 1 contents: local: <interface>-mtu.conf 2 mode: 0600",
"variant: openshift version: 4.18.0 metadata: name: 01-worker-interface labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf 1 contents: local: <interface>-mtu.conf 2 mode: 0600",
"for manifest in control-plane-interface worker-interface; do butane --files-dir . USDmanifest.bu > USDmanifest.yaml done",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": { \"mtu\": { \"network\": { \"from\": <overlay_from>, \"to\": <overlay_to> } , \"machine\": { \"to\" : <machine_to> } } } } }'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": { \"mtu\": { \"network\": { \"from\": 1400, \"to\": 9000 } , \"machine\": { \"to\" : 9100} } } } }'",
"oc get machineconfigpools",
"oc describe node | egrep \"hostname|machineconfig\"",
"kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done",
"oc get machineconfig <config_name> -o yaml | grep ExecStart",
"ExecStart=/usr/local/bin/mtu-migration.sh",
"for manifest in control-plane-interface worker-interface; do oc create -f USDmanifest.yaml done",
"oc get machineconfigpools",
"oc describe node | egrep \"hostname|machineconfig\"",
"kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done",
"oc get machineconfig <config_name> -o yaml | grep path:",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": null, \"defaultNetwork\":{ \"ovnKubernetesConfig\": { \"mtu\": <mtu> }}}}'",
"oc get machineconfigpools",
"oc describe network.config cluster",
"oc get nodes",
"oc debug node/<node> -- chroot /host ip address show <interface>",
"ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8051",
"oc patch network.config.openshift.io cluster --type=merge -p '{ \"spec\": { \"serviceNodePortRange\": \"30000-<port>\" } }'",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: serviceNodePortRange: \"30000-<port>\"",
"network.config.openshift.io/cluster patched",
"oc get configmaps -n openshift-kube-apiserver config -o jsonpath=\"{.data['config\\.yaml']}\" | grep -Eo '\"service-node-port-range\":[\"[[:digit:]]+-[[:digit:]]+\"]'",
"\"service-node-port-range\":[\"30000-33000\"]",
"oc get network.operator.openshift.io -o jsonpath=\"{.items[0].spec.clusterNetwork}\"",
"[{\"cidr\":\"10.217.0.0/22\",\"hostPrefix\":23}]",
"oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\":{ \"clusterNetwork\": [ {\"cidr\":\"<network>/<cidr>\",\"hostPrefix\":<prefix>} ], \"networkType\": \"OVNKubernetes\" } }'",
"oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\":{ \"clusterNetwork\": [ {\"cidr\":\"10.217.0.0/14\",\"hostPrefix\": 23} ], \"networkType\": \"OVNKubernetes\" } }'",
"network.config.openshift.io/cluster patched",
"oc get network.operator.openshift.io -o jsonpath=\"{.items[0].spec.clusterNetwork}\"",
"[{\"cidr\":\"10.217.0.0/14\",\"hostPrefix\":23}]",
"oc create sa ipfailover",
"oc adm policy add-scc-to-user privileged -z ipfailover",
"oc adm policy add-scc-to-user hostnetwork -z ipfailover",
"openstack port show <cluster_name> -c allowed_address_pairs",
"*Field* *Value* allowed_address_pairs ip_address='192.168.0.5', mac_address='fa:16:3e:31:f9:cb' ip_address='192.168.0.7', mac_address='fa:16:3e:31:f9:cb'",
"openstack port set <cluster_name> --allowed-address ip-address=1.1.1.1,mac-address=fa:fa:16:3e:31:f9:cb",
"apiVersion: apps/v1 kind: Deployment metadata: name: ipfailover-keepalived spec: env: - name: OPENSHIFT_HA_VIRTUAL_IPS value: \"1.1.1.1\"",
"apiVersion: apps/v1 kind: Deployment metadata: name: ipfailover-keepalived 1 labels: ipfailover: hello-openshift spec: strategy: type: Recreate replicas: 2 selector: matchLabels: ipfailover: hello-openshift template: metadata: labels: ipfailover: hello-openshift spec: serviceAccountName: ipfailover privileged: true hostNetwork: true nodeSelector: node-role.kubernetes.io/worker: \"\" containers: - name: openshift-ipfailover image: registry.redhat.io/openshift4/ose-keepalived-ipfailover-rhel9:v4.18 ports: - containerPort: 63000 hostPort: 63000 imagePullPolicy: IfNotPresent securityContext: privileged: true volumeMounts: - name: lib-modules mountPath: /lib/modules readOnly: true - name: host-slash mountPath: /host readOnly: true mountPropagation: HostToContainer - name: etc-sysconfig mountPath: /etc/sysconfig readOnly: true - name: config-volume mountPath: /etc/keepalive env: - name: OPENSHIFT_HA_CONFIG_NAME value: \"ipfailover\" - name: OPENSHIFT_HA_VIRTUAL_IPS 2 value: \"1.1.1.1-2\" - name: OPENSHIFT_HA_VIP_GROUPS 3 value: \"10\" - name: OPENSHIFT_HA_NETWORK_INTERFACE 4 value: \"ens3\" #The host interface to assign the VIPs - name: OPENSHIFT_HA_MONITOR_PORT 5 value: \"30060\" - name: OPENSHIFT_HA_VRRP_ID_OFFSET 6 value: \"0\" - name: OPENSHIFT_HA_REPLICA_COUNT 7 value: \"2\" #Must match the number of replicas in the deployment - name: OPENSHIFT_HA_USE_UNICAST value: \"false\" #- name: OPENSHIFT_HA_UNICAST_PEERS #value: \"10.0.148.40,10.0.160.234,10.0.199.110\" - name: OPENSHIFT_HA_IPTABLES_CHAIN 8 value: \"INPUT\" #- name: OPENSHIFT_HA_NOTIFY_SCRIPT 9 # value: /etc/keepalive/mynotifyscript.sh - name: OPENSHIFT_HA_CHECK_SCRIPT 10 value: \"/etc/keepalive/mycheckscript.sh\" - name: OPENSHIFT_HA_PREEMPTION 11 value: \"preempt_delay 300\" - name: OPENSHIFT_HA_CHECK_INTERVAL 12 value: \"2\" livenessProbe: initialDelaySeconds: 10 exec: command: - pgrep - keepalived volumes: - name: lib-modules hostPath: path: /lib/modules - name: host-slash hostPath: path: / - name: etc-sysconfig hostPath: path: /etc/sysconfig # config-volume contains the check script # created with `oc create configmap keepalived-checkscript --from-file=mycheckscript.sh` - configMap: defaultMode: 0755 name: keepalived-checkscript name: config-volume imagePullSecrets: - name: openshift-pull-secret 13",
"#!/bin/bash # Whatever tests are needed # E.g., send request and verify response exit 0",
"oc create configmap mycustomcheck --from-file=mycheckscript.sh",
"oc set env deploy/ipfailover-keepalived OPENSHIFT_HA_CHECK_SCRIPT=/etc/keepalive/mycheckscript.sh",
"oc set volume deploy/ipfailover-keepalived --add --overwrite --name=config-volume --mount-path=/etc/keepalive --source='{\"configMap\": { \"name\": \"mycustomcheck\", \"defaultMode\": 493}}'",
"oc edit deploy ipfailover-keepalived",
"spec: containers: - env: - name: OPENSHIFT_HA_CHECK_SCRIPT 1 value: /etc/keepalive/mycheckscript.sh volumeMounts: 2 - mountPath: /etc/keepalive name: config-volume dnsPolicy: ClusterFirst volumes: 3 - configMap: defaultMode: 0755 4 name: customrouter name: config-volume",
"oc edit deploy ipfailover-keepalived",
"spec: containers: - env: - name: OPENSHIFT_HA_PREEMPTION 1 value: preempt_delay 300",
"spec: env: - name: OPENSHIFT_HA_VIP_GROUPS 1 value: \"3\"",
"oc get pod -l ipfailover -o jsonpath=\" {range .items[?(@.spec.volumes[*].configMap)]} {'Namespace: '}{.metadata.namespace} {'Pod: '}{.metadata.name} {'Volumes that use config maps:'} {range .spec.volumes[?(@.configMap)]} {'volume: '}{.name} {'configMap: '}{.configMap.name}{'\\n'}{end} {end}\"",
"Namespace: default Pod: keepalived-worker-59df45db9c-2x9mn Volumes that use config maps: volume: config-volume configMap: mycustomcheck",
"oc delete configmap <configmap_name>",
"oc get deployment -l ipfailover",
"NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE default ipfailover 2/2 2 2 105d",
"oc delete deployment <ipfailover_deployment_name>",
"oc delete sa ipfailover",
"apiVersion: batch/v1 kind: Job metadata: generateName: remove-ipfailover- labels: app: remove-ipfailover spec: template: metadata: name: remove-ipfailover spec: containers: - name: remove-ipfailover image: registry.redhat.io/openshift4/ose-keepalived-ipfailover-rhel9:v4.18 command: [\"/var/lib/ipfailover/keepalived/remove-failover.sh\"] nodeSelector: 1 kubernetes.io/hostname: <host_name> 2 restartPolicy: Never",
"oc create -f remove-ipfailover-job.yaml",
"job.batch/remove-ipfailover-2h8dm created",
"oc logs job/remove-ipfailover-2h8dm",
"remove-failover.sh: OpenShift IP Failover service terminating. - Removing ip_vs module - Cleaning up - Releasing VIPs (interface eth0)",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: <name> 1 namespace: default 2 spec: config: '{ \"cniVersion\": \"0.4.0\", 3 \"name\": \"<name>\", 4 \"plugins\": [{ \"type\": \"<main_CNI_plugin>\" 5 }, { \"type\": \"tuning\", 6 \"sysctl\": { \"net.ipv4.conf.IFNAME.accept_redirects\": \"1\" 7 } } ] }",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: tuningnad namespace: default spec: config: '{ \"cniVersion\": \"0.4.0\", \"name\": \"tuningnad\", \"plugins\": [{ \"type\": \"bridge\" }, { \"type\": \"tuning\", \"sysctl\": { \"net.ipv4.conf.IFNAME.accept_redirects\": \"1\" } } ] }'",
"oc apply -f tuning-example.yaml",
"networkattachmentdefinition.k8.cni.cncf.io/tuningnad created",
"apiVersion: v1 kind: Pod metadata: name: tunepod namespace: default annotations: k8s.v1.cni.cncf.io/networks: tuningnad 1 spec: containers: - name: podexample image: centos command: [\"/bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 2 runAsGroup: 3000 3 allowPrivilegeEscalation: false 4 capabilities: 5 drop: [\"ALL\"] securityContext: runAsNonRoot: true 6 seccompProfile: 7 type: RuntimeDefault",
"oc apply -f examplepod.yaml",
"oc get pod",
"NAME READY STATUS RESTARTS AGE tunepod 1/1 Running 0 47s",
"oc rsh tunepod",
"sh-4.4# sysctl net.ipv4.conf.net1.accept_redirects",
"net.ipv4.conf.net1.accept_redirects = 1",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: <name> 1 namespace: default 2 spec: config: '{ \"cniVersion\": \"0.4.0\", 3 \"name\": \"<name>\", 4 \"plugins\": [{ \"type\": \"<main_CNI_plugin>\" 5 }, { \"type\": \"tuning\", 6 \"allmulti\": true 7 } } ] }",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: setallmulti namespace: default spec: config: '{ \"cniVersion\": \"0.4.0\", \"name\": \"setallmulti\", \"plugins\": [ { \"type\": \"bridge\" }, { \"type\": \"tuning\", \"allmulti\": true } ] }'",
"oc apply -f tuning-allmulti.yaml",
"networkattachmentdefinition.k8s.cni.cncf.io/setallmulti created",
"apiVersion: v1 kind: Pod metadata: name: allmultipod namespace: default annotations: k8s.v1.cni.cncf.io/networks: setallmulti 1 spec: containers: - name: podexample image: centos command: [\"/bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 2 runAsGroup: 3000 3 allowPrivilegeEscalation: false 4 capabilities: 5 drop: [\"ALL\"] securityContext: runAsNonRoot: true 6 seccompProfile: 7 type: RuntimeDefault",
"oc apply -f examplepod.yaml",
"oc get pod",
"NAME READY STATUS RESTARTS AGE allmultipod 1/1 Running 0 23s",
"oc rsh allmultipod",
"sh-4.4# ip link",
"1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: eth0@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8901 qdisc noqueue state UP mode DEFAULT group default link/ether 0a:58:0a:83:00:10 brd ff:ff:ff:ff:ff:ff link-netnsid 0 1 3: net1@if24: <BROADCAST,MULTICAST,ALLMULTI,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default link/ether ee:9b:66:a4:ec:1d brd ff:ff:ff:ff:ff:ff link-netnsid 0 2",
"apiVersion: v1 kind: Pod metadata: namespace: project1 name: example-pod spec: containers: - name: example-pod ports: - containerPort: 30100 name: sctpserver protocol: SCTP",
"apiVersion: v1 kind: Service metadata: namespace: project1 name: sctpserver spec: ports: - name: sctpserver protocol: SCTP port: 30100 targetPort: 30100 type: ClusterIP",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-sctp-on-http spec: podSelector: matchLabels: role: web ingress: - ports: - protocol: SCTP port: 80",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: load-sctp-module labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modprobe.d/sctp-blacklist.conf mode: 0644 overwrite: true contents: source: data:, - path: /etc/modules-load.d/sctp-load.conf mode: 0644 overwrite: true contents: source: data:,sctp",
"oc create -f load-sctp-module.yaml",
"oc get nodes",
"apiVersion: v1 kind: Pod metadata: name: sctpserver labels: app: sctpserver spec: containers: - name: sctpserver image: registry.access.redhat.com/ubi9/ubi command: [\"/bin/sh\", \"-c\"] args: [\"dnf install -y nc && sleep inf\"] ports: - containerPort: 30102 name: sctpserver protocol: SCTP",
"oc create -f sctp-server.yaml",
"apiVersion: v1 kind: Service metadata: name: sctpservice labels: app: sctpserver spec: type: NodePort selector: app: sctpserver ports: - name: sctpserver protocol: SCTP port: 30102 targetPort: 30102",
"oc create -f sctp-service.yaml",
"apiVersion: v1 kind: Pod metadata: name: sctpclient labels: app: sctpclient spec: containers: - name: sctpclient image: registry.access.redhat.com/ubi9/ubi command: [\"/bin/sh\", \"-c\"] args: [\"dnf install -y nc && sleep inf\"]",
"oc apply -f sctp-client.yaml",
"oc rsh sctpserver",
"nc -l 30102 --sctp",
"oc get services sctpservice -o go-template='{{.spec.clusterIP}}{{\"\\n\"}}'",
"oc rsh sctpclient",
"nc <cluster_IP> 30102 --sctp",
"apiVersion: v1 kind: Namespace metadata: name: openshift-ptp annotations: workload.openshift.io/allowed: management labels: name: openshift-ptp openshift.io/cluster-monitoring: \"true\"",
"oc create -f ptp-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ptp-operators namespace: openshift-ptp spec: targetNamespaces: - openshift-ptp",
"oc create -f ptp-operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ptp-operator-subscription namespace: openshift-ptp spec: channel: \"stable\" name: ptp-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f ptp-sub.yaml",
"oc get csv -n openshift-ptp -o custom-columns=Name:.metadata.name,Phase:.status.phase",
"Name Phase 4.18.0-202301261535 Succeeded",
"oc get NodePtpDevice -n openshift-ptp -o yaml",
"apiVersion: v1 items: - apiVersion: ptp.openshift.io/v1 kind: NodePtpDevice metadata: creationTimestamp: \"2022-01-27T15:16:28Z\" generation: 1 name: dev-worker-0 1 namespace: openshift-ptp resourceVersion: \"6538103\" uid: d42fc9ad-bcbf-4590-b6d8-b676c642781a spec: {} status: devices: 2 - name: eno1 - name: eno2 - name: eno3 - name: eno4 - name: enp5s0f0 - name: enp5s0f1",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: grandmaster namespace: openshift-ptp annotations: {} spec: profile: - name: \"grandmaster\" ptp4lOpts: \"-2 --summary_interval -4\" phc2sysOpts: -r -u 0 -m -w -N 8 -R 16 -s USDiface_master -n 24 ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" plugins: e810: enableDefaultConfig: false settings: LocalMaxHoldoverOffSet: 1500 LocalHoldoverTimeout: 14400 MaxInSpecOffset: 100 pins: USDe810_pins # \"USDiface_master\": # \"U.FL2\": \"0 2\" # \"U.FL1\": \"0 1\" # \"SMA2\": \"0 2\" # \"SMA1\": \"0 1\" ublxCmds: - args: #ubxtool -P 29.20 -z CFG-HW-ANT_CFG_VOLTCTRL,1 - \"-P\" - \"29.20\" - \"-z\" - \"CFG-HW-ANT_CFG_VOLTCTRL,1\" reportOutput: false - args: #ubxtool -P 29.20 -e GPS - \"-P\" - \"29.20\" - \"-e\" - \"GPS\" reportOutput: false - args: #ubxtool -P 29.20 -d Galileo - \"-P\" - \"29.20\" - \"-d\" - \"Galileo\" reportOutput: false - args: #ubxtool -P 29.20 -d GLONASS - \"-P\" - \"29.20\" - \"-d\" - \"GLONASS\" reportOutput: false - args: #ubxtool -P 29.20 -d BeiDou - \"-P\" - \"29.20\" - \"-d\" - \"BeiDou\" reportOutput: false - args: #ubxtool -P 29.20 -d SBAS - \"-P\" - \"29.20\" - \"-d\" - \"SBAS\" reportOutput: false - args: #ubxtool -P 29.20 -t -w 5 -v 1 -e SURVEYIN,600,50000 - \"-P\" - \"29.20\" - \"-t\" - \"-w\" - \"5\" - \"-v\" - \"1\" - \"-e\" - \"SURVEYIN,600,50000\" reportOutput: true - args: #ubxtool -P 29.20 -p MON-HW - \"-P\" - \"29.20\" - \"-p\" - \"MON-HW\" reportOutput: true - args: #ubxtool -P 29.20 -p CFG-MSG,1,38,248 - \"-P\" - \"29.20\" - \"-p\" - \"CFG-MSG,1,38,248\" reportOutput: true ts2phcOpts: \" \" ts2phcConf: | [nmea] ts2phc.master 1 [global] use_syslog 0 verbose 1 logging_level 7 ts2phc.pulsewidth 100000000 #cat /dev/GNSS to find available serial port #example value of gnss_serialport is /dev/ttyGNSS_1700_0 ts2phc.nmea_serialport USDgnss_serialport [USDiface_master] ts2phc.extts_polarity rising ts2phc.extts_correction 0 ptp4lConf: | [USDiface_master] masterOnly 1 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 6 clockAccuracy 0x27 offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval 0 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval -4 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # 
# Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0x20 recommend: - profile: \"grandmaster\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"",
"oc create -f grandmaster-clock-ptp-config.yaml",
"oc get pods -n openshift-ptp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-74m2g 3/3 Running 3 4d15h 10.16.230.7 compute-1.example.com ptp-operator-5f4f48d7c-x7zkf 1/1 Running 1 4d15h 10.128.1.145 compute-1.example.com",
"oc logs linuxptp-daemon-74m2g -n openshift-ptp -c linuxptp-daemon-container",
"ts2phc[94980.334]: [ts2phc.0.config] nmea delay: 98690975 ns ts2phc[94980.334]: [ts2phc.0.config] ens3f0 extts index 0 at 1676577329.999999999 corr 0 src 1676577330.901342528 diff -1 ts2phc[94980.334]: [ts2phc.0.config] ens3f0 master offset -1 s2 freq -1 ts2phc[94980.441]: [ts2phc.0.config] nmea sentence: GNRMC,195453.00,A,4233.24427,N,07126.64420,W,0.008,,160223,,,A,V phc2sys[94980.450]: [ptp4l.0.config] CLOCK_REALTIME phc offset 943 s2 freq -89604 delay 504 phc2sys[94980.512]: [ptp4l.0.config] CLOCK_REALTIME phc offset 1000 s2 freq -89264 delay 474",
"In this example two cards USDiface_nic1 and USDiface_nic2 are connected via SMA1 ports by a cable and USDiface_nic2 receives 1PPS signals from USDiface_nic1 apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: grandmaster namespace: openshift-ptp annotations: {} spec: profile: - name: \"grandmaster\" ptp4lOpts: \"-2 --summary_interval -4\" phc2sysOpts: -r -u 0 -m -w -N 8 -R 16 -s USDiface_nic1 -n 24 ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" plugins: e810: enableDefaultConfig: false settings: LocalMaxHoldoverOffSet: 1500 LocalHoldoverTimeout: 14400 MaxInSpecOffset: 100 pins: USDe810_pins # \"USDiface_nic1\": # \"U.FL2\": \"0 2\" # \"U.FL1\": \"0 1\" # \"SMA2\": \"0 2\" # \"SMA1\": \"2 1\" # \"USDiface_nic2\": # \"U.FL2\": \"0 2\" # \"U.FL1\": \"0 1\" # \"SMA2\": \"0 2\" # \"SMA1\": \"1 1\" ublxCmds: - args: #ubxtool -P 29.20 -z CFG-HW-ANT_CFG_VOLTCTRL,1 - \"-P\" - \"29.20\" - \"-z\" - \"CFG-HW-ANT_CFG_VOLTCTRL,1\" reportOutput: false - args: #ubxtool -P 29.20 -e GPS - \"-P\" - \"29.20\" - \"-e\" - \"GPS\" reportOutput: false - args: #ubxtool -P 29.20 -d Galileo - \"-P\" - \"29.20\" - \"-d\" - \"Galileo\" reportOutput: false - args: #ubxtool -P 29.20 -d GLONASS - \"-P\" - \"29.20\" - \"-d\" - \"GLONASS\" reportOutput: false - args: #ubxtool -P 29.20 -d BeiDou - \"-P\" - \"29.20\" - \"-d\" - \"BeiDou\" reportOutput: false - args: #ubxtool -P 29.20 -d SBAS - \"-P\" - \"29.20\" - \"-d\" - \"SBAS\" reportOutput: false - args: #ubxtool -P 29.20 -t -w 5 -v 1 -e SURVEYIN,600,50000 - \"-P\" - \"29.20\" - \"-t\" - \"-w\" - \"5\" - \"-v\" - \"1\" - \"-e\" - \"SURVEYIN,600,50000\" reportOutput: true - args: #ubxtool -P 29.20 -p MON-HW - \"-P\" - \"29.20\" - \"-p\" - \"MON-HW\" reportOutput: true - args: #ubxtool -P 29.20 -p CFG-MSG,1,38,248 - \"-P\" - \"29.20\" - \"-p\" - \"CFG-MSG,1,38,248\" reportOutput: true ts2phcOpts: \" \" ts2phcConf: | [nmea] ts2phc.master 1 [global] use_syslog 0 verbose 1 logging_level 7 ts2phc.pulsewidth 100000000 #cat /dev/GNSS to find available serial port #example value of gnss_serialport is /dev/ttyGNSS_1700_0 ts2phc.nmea_serialport USDgnss_serialport [USDiface_nic1] ts2phc.extts_polarity rising ts2phc.extts_correction 0 [USDiface_nic2] ts2phc.master 0 ts2phc.extts_polarity rising #this is a measured value in nanoseconds to compensate for SMA cable delay ts2phc.extts_correction -10 ptp4lConf: | [USDiface_nic1] masterOnly 1 [USDiface_nic1_1] masterOnly 1 [USDiface_nic1_2] masterOnly 1 [USDiface_nic1_3] masterOnly 1 [USDiface_nic2] masterOnly 1 [USDiface_nic2_1] masterOnly 1 [USDiface_nic2_2] masterOnly 1 [USDiface_nic2_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 6 clockAccuracy 0x27 offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval 0 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval -4 kernel_leap 1 
check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 1 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0x20 recommend: - profile: \"grandmaster\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"",
"oc create -f grandmaster-clock-ptp-config-dual-nics.yaml",
"oc get pods -n openshift-ptp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-74m2g 3/3 Running 3 4d15h 10.16.230.7 compute-1.example.com ptp-operator-5f4f48d7c-x7zkf 1/1 Running 1 4d15h 10.128.1.145 compute-1.example.com",
"oc logs linuxptp-daemon-74m2g -n openshift-ptp -c linuxptp-daemon-container",
"ts2phc[509863.660]: [ts2phc.0.config] nmea delay: 347527248 ns ts2phc[509863.660]: [ts2phc.0.config] ens2f0 extts index 0 at 1705516553.000000000 corr 0 src 1705516553.652499081 diff 0 ts2phc[509863.660]: [ts2phc.0.config] ens2f0 master offset 0 s2 freq -0 I0117 18:35:16.000146 1633226 stats.go:57] state updated for ts2phc =s2 I0117 18:35:16.000163 1633226 event.go:417] dpll State s2, gnss State s2, tsphc state s2, gm state s2, ts2phc[1705516516]:[ts2phc.0.config] ens2f0 nmea_status 1 offset 0 s2 GM[1705516516]:[ts2phc.0.config] ens2f0 T-GM-STATUS s2 ts2phc[509863.677]: [ts2phc.0.config] ens7f0 extts index 0 at 1705516553.000000010 corr -10 src 1705516553.652499081 diff 0 ts2phc[509863.677]: [ts2phc.0.config] ens7f0 master offset 0 s2 freq -0 I0117 18:35:16.016597 1633226 stats.go:57] state updated for ts2phc =s2 phc2sys[509863.719]: [ptp4l.0.config] CLOCK_REALTIME phc offset -6 s2 freq +15441 delay 510 phc2sys[509863.782]: [ptp4l.0.config] CLOCK_REALTIME phc offset -7 s2 freq +15438 delay 502",
"ublxCmds: - args: - \"-P\" - \"29.20\" - \"-z\" - \"CFG-HW-ANT_CFG_VOLTCTRL,1\" - \"-z\" - \"CFG-TP-ANT_CABLEDELAY,<antenna_delay_offset>\" 1 reportOutput: false",
"phc2sysOpts: -r -u 0 -m -w -N 8 -R 16 -S 2 -s ens2f0 -n 24 1",
"- args: #ubxtool -P 29.20 -p CFG-MSG,1,38,248 - \"-P\" - \"29.20\" - \"-p\" - \"CFG-MSG,1,38,248\"",
"oc -n openshift-ptp -c linuxptp-daemon-container exec -it USD(oc -n openshift-ptp get pods -o name | grep daemon) -- ubxtool -t -p NAV-TIMELS -P 29.20",
"1722509534.4417 UBX-NAV-STATUS: iTOW 384752000 gpsFix 5 flags 0xdd fixStat 0x0 flags2 0x8 ttff 18261, msss 1367642864 1722509534.4419 UBX-NAV-TIMELS: iTOW 384752000 version 0 reserved2 0 0 0 srcOfCurrLs 2 currLs 18 srcOfLsChange 2 lsChange 0 timeToLsEvent 70376866 dateOfLsGpsWn 2441 dateOfLsGpsDn 7 reserved2 0 0 0 valid x3 1722509534.4421 UBX-NAV-CLOCK: iTOW 384752000 clkB 784281 clkD 435 tAcc 3 fAcc 215 1722509535.4477 UBX-NAV-STATUS: iTOW 384753000 gpsFix 5 flags 0xdd fixStat 0x0 flags2 0x8 ttff 18261, msss 1367643864 1722509535.4479 UBX-NAV-CLOCK: iTOW 384753000 clkB 784716 clkD 435 tAcc 3 fAcc 218",
"oc -n openshift-ptp get configmap leap-configmap -o jsonpath='{.data.<node_name>}' 1",
"Do not edit This file is generated automatically by linuxptp-daemon #USD 3913697179 #@ 4291747200 2272060800 10 # 1 Jan 1972 2287785600 11 # 1 Jul 1972 2303683200 12 # 1 Jan 1973 2335219200 13 # 1 Jan 1974 2366755200 14 # 1 Jan 1975 2398291200 15 # 1 Jan 1976 2429913600 16 # 1 Jan 1977 2461449600 17 # 1 Jan 1978 2492985600 18 # 1 Jan 1979 2524521600 19 # 1 Jan 1980 2571782400 20 # 1 Jul 1981 2603318400 21 # 1 Jul 1982 2634854400 22 # 1 Jul 1983 2698012800 23 # 1 Jul 1985 2776982400 24 # 1 Jan 1988 2840140800 25 # 1 Jan 1990 2871676800 26 # 1 Jan 1991 2918937600 27 # 1 Jul 1992 2950473600 28 # 1 Jul 1993 2982009600 29 # 1 Jul 1994 3029443200 30 # 1 Jan 1996 3076704000 31 # 1 Jul 1997 3124137600 32 # 1 Jan 1999 3345062400 33 # 1 Jan 2006 3439756800 34 # 1 Jan 2009 3550089600 35 # 1 Jul 2012 3644697600 36 # 1 Jul 2015 3692217600 37 # 1 Jan 2017 #h e65754d4 8f39962b aa854a61 661ef546 d2af0bfa",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-clock namespace: openshift-ptp annotations: {} spec: profile: - name: boundary-clock ptp4lOpts: \"-2\" phc2sysOpts: \"-a -r -n 24\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | # The interface name is hardware-specific [USDiface_slave] masterOnly 0 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 248 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 135 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: boundary-clock priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"",
"oc create -f boundary-clock-ptp-config.yaml",
"oc get pods -n openshift-ptp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-4xkbb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com linuxptp-daemon-tdspf 1/1 Running 0 43m 10.1.196.25 compute-1.example.com ptp-operator-657bbb64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.com",
"oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container",
"I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to: I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------ I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: profile1 I1115 09:41:17.117616 4143292 daemon.go:102] Interface: I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -2 I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r -n 24 I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-clock-ptp-config-nic1 namespace: openshift-ptp spec: profile: - name: \"profile1\" ptp4lOpts: \"-2 --summary_interval -4\" ptp4lConf: | 1 [ens5f1] masterOnly 1 [ens5f0] masterOnly 0 phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" 2",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-clock-ptp-config-nic2 namespace: openshift-ptp spec: profile: - name: \"profile2\" ptp4lOpts: \"-2 --summary_interval -4\" ptp4lConf: | 1 [ens7f1] masterOnly 1 [ens7f0] masterOnly 0",
"oc create -f boundary-clock-ptp-config-nic1.yaml",
"oc create -f boundary-clock-ptp-config-nic2.yaml",
"oc logs linuxptp-daemon-cvgr6 -n openshift-ptp -c linuxptp-daemon-container",
"ptp4l[80828.335]: [ptp4l.1.config] master offset 5 s2 freq -5727 path delay 519 ptp4l[80828.343]: [ptp4l.0.config] master offset -5 s2 freq -10607 path delay 533 phc2sys[80828.390]: [ptp4l.0.config] CLOCK_REALTIME phc offset 1 s2 freq -87239 delay 539",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: ha-ptp-config-nic1 namespace: openshift-ptp spec: profile: - name: \"ha-ptp-config-profile1\" ptp4lOpts: \"-2 --summary_interval -4\" ptp4lConf: | 1 [ens5f1] masterOnly 1 [ens5f0] masterOnly 0 # phc2sysOpts: \"\" 2",
"oc create -f ha-ptp-config-nic1.yaml",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: ha-ptp-config-nic2 namespace: openshift-ptp spec: profile: - name: \"ha-ptp-config-profile2\" ptp4lOpts: \"-2 --summary_interval -4\" ptp4lConf: | [ens7f1] masterOnly 1 [ens7f0] masterOnly 0 # phc2sysOpts: \"\"",
"oc create -f ha-ptp-config-nic2.yaml",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-ha namespace: openshift-ptp annotations: {} spec: profile: - name: \"boundary-ha\" ptp4lOpts: \"\" 1 phc2sysOpts: \"-a -r -n 24\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" haProfiles: \"USDprofile1,USDprofile2\" recommend: - profile: \"boundary-ha\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"",
"oc create -f ptp-config-for-ha.yaml",
"oc get pods -n openshift-ptp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-4xkrb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com ptp-operator-657bbq64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.com",
"oc logs linuxptp-daemon-4xkrb -n openshift-ptp -c linuxptp-daemon-container",
"I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to: I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------ I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: ha-ptp-config-profile1 I1115 09:41:17.117616 4143292 daemon.go:102] Interface: I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -2 I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r -n 24 I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: ordinary-clock namespace: openshift-ptp annotations: {} spec: profile: - name: ordinary-clock # The interface name is hardware-specific interface: USDinterface ptp4lOpts: \"-2 -s\" phc2sysOpts: \"-a -r -n 24\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: ordinary-clock priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"",
"oc create -f ordinary-clock-ptp-config.yaml",
"oc get pods -n openshift-ptp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-4xkbb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com linuxptp-daemon-tdspf 1/1 Running 0 43m 10.1.196.25 compute-1.example.com ptp-operator-657bbb64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.com",
"oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container",
"I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to: I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------ I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: profile1 I1115 09:41:17.117616 4143292 daemon.go:102] Interface: ens787f1 I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -2 -s I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r -n 24 I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------",
"oc edit PtpConfig -n openshift-ptp",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: <ptp_config_name> namespace: openshift-ptp spec: profile: - name: \"profile1\" ptpSchedulingPolicy: SCHED_FIFO 1 ptpSchedulingPriority: 10 2",
"oc get pods -n openshift-ptp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-gmv2n 3/3 Running 0 1d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-lgm55 3/3 Running 0 1d17h 10.1.196.25 compute-1.example.com ptp-operator-3r4dcvf7f4-zndk7 1/1 Running 0 1d7h 10.129.0.61 control-plane-1.example.com",
"oc -n openshift-ptp logs linuxptp-daemon-lgm55 -c linuxptp-daemon-container|grep chrt",
"I1216 19:24:57.091872 1600715 daemon.go:285] /bin/chrt -f 65 /usr/sbin/ptp4l -f /var/run/ptp4l.0.config -2 --summary_interval -4 -m",
"oc edit PtpConfig -n openshift-ptp",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: <ptp_config_name> namespace: openshift-ptp spec: profile: - name: \"profile1\" ptpSettings: logReduce: \"true\"",
"oc get pods -n openshift-ptp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-gmv2n 3/3 Running 0 1d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-lgm55 3/3 Running 0 1d17h 10.1.196.25 compute-1.example.com ptp-operator-3r4dcvf7f4-zndk7 1/1 Running 0 1d7h 10.129.0.61 control-plane-1.example.com",
"oc -n openshift-ptp logs <linux_daemon_container> -c linuxptp-daemon-container | grep \"master offset\" 1",
"oc get pods -n openshift-ptp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-lmvgn 3/3 Running 0 4d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-qhfg7 3/3 Running 0 4d17h 10.1.196.25 compute-1.example.com ptp-operator-6b8dcbf7f4-zndk7 1/1 Running 0 5d7h 10.129.0.61 control-plane-1.example.com",
"oc -n openshift-ptp get nodeptpdevices.ptp.openshift.io",
"NAME AGE control-plane-0.example.com 10d control-plane-1.example.com 10d compute-0.example.com 10d compute-1.example.com 10d compute-2.example.com 10d",
"oc -n openshift-ptp get nodeptpdevices.ptp.openshift.io <node_name> -o yaml",
"apiVersion: ptp.openshift.io/v1 kind: NodePtpDevice metadata: creationTimestamp: \"2021-09-14T16:52:33Z\" generation: 1 name: compute-0.example.com namespace: openshift-ptp resourceVersion: \"177400\" uid: 30413db0-4d8d-46da-9bef-737bacd548fd spec: {} status: devices: - name: eno1 - name: eno2 - name: eno3 - name: eno4 - name: enp5s0f0 - name: enp5s0f1",
"oc get pods -n openshift-ptp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-lmvgn 3/3 Running 0 4d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-qhfg7 3/3 Running 0 4d17h 10.1.196.25 compute-1.example.com ptp-operator-6b8dcbf7f4-zndk7 1/1 Running 0 5d7h 10.129.0.61 control-plane-1.example.com",
"oc rsh -n openshift-ptp -c linuxptp-daemon-container <linux_daemon_container>",
"pmc -u -f /var/run/ptp4l.0.config -b 0 'GET PORT_DATA_SET'",
"sending: GET PORT_DATA_SET 40a6b7.fffe.166ef0-1 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET portIdentity 40a6b7.fffe.166ef0-1 portState SLAVE logMinDelayReqInterval -4 peerMeanPathDelay 0 logAnnounceInterval -3 announceReceiptTimeout 3 logSyncInterval -4 delayMechanism 1 logMinPdelayReqInterval -4 versionNumber 2",
"oc rsh -n openshift-ptp -c linuxptp-daemon-container linuxptp-daemon-74m2g ethtool -i ens7f0",
"driver: ice version: 5.14.0-356.bz2232515.el9.x86_64 firmware-version: 4.20 0x8001778b 1.3346.0",
"oc rsh -n openshift-ptp -c linuxptp-daemon-container linuxptp-daemon-jnz6r cat /dev/gnss0",
"USDGNRMC,125223.00,A,4233.24463,N,07126.64561,W,0.000,,300823,,,A,V*0A USDGNVTG,,T,,M,0.000,N,0.000,K,A*3D USDGNGGA,125223.00,4233.24463,N,07126.64561,W,1,12,99.99,98.6,M,-33.1,M,,*7E USDGNGSA,A,3,25,17,19,11,12,06,05,04,09,20,,,99.99,99.99,99.99,1*37 USDGPGSV,3,1,10,04,12,039,41,05,31,222,46,06,50,064,48,09,28,064,42,1*62",
"oc debug node/<node_name>",
"sh-4.4# devlink dev info <bus_name>/<device_name> | grep cgu",
"cgu.id 36 1 fw.cgu 8032.16973825.6021 2",
"oc adm must-gather --image=registry.redhat.io/openshift4/ptp-must-gather-rhel8:v4.18",
"apiVersion: ptp.openshift.io/v1 kind: PtpOperatorConfig metadata: name: default namespace: openshift-ptp spec: daemonNodeSelector: node-role.kubernetes.io/worker: \"\" ptpEventConfig: apiVersion: \"2.0\" 1 enableEventPublisher: true 2",
"oc apply -f ptp-operatorconfig.yaml",
"spec: profile: - name: \"profile1\" interface: \"enp5s0f0\" ptp4lOpts: \"-2 -s --summary_interval -4\" 1 phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" 2 ptp4lConf: \"\" 3 ptpClockThreshold: 4 holdOverTimeout: 5 maxOffsetThreshold: 100 minOffsetThreshold: -100",
"func server() { http.HandleFunc(\"/event\", getEvent) http.ListenAndServe(\":9043\", nil) } func getEvent(w http.ResponseWriter, req *http.Request) { defer req.Body.Close() bodyBytes, err := io.ReadAll(req.Body) if err != nil { log.Errorf(\"error reading event %v\", err) } e := string(bodyBytes) if e != \"\" { processEvent(bodyBytes) log.Infof(\"received event %s\", string(bodyBytes)) } w.WriteHeader(http.StatusNoContent) }",
"import ( \"github.com/redhat-cne/sdk-go/pkg/pubsub\" \"github.com/redhat-cne/sdk-go/pkg/types\" v1pubsub \"github.com/redhat-cne/sdk-go/v1/pubsub\" ) // Subscribe to PTP events using v2 REST API s1,_:=createsubscription(\"/cluster/node/<node_name>/sync/sync-status/sync-state\") s2,_:=createsubscription(\"/cluster/node/<node_name>/sync/ptp-status/lock-state\") s3,_:=createsubscription(\"/cluster/node/<node_name>/sync/gnss-status/gnss-sync-status\") s4,_:=createsubscription(\"/cluster/node/<node_name>/sync/sync-status/os-clock-sync-state\") s5,_:=createsubscription(\"/cluster/node/<node_name>/sync/ptp-status/clock-class\") // Create PTP event subscriptions POST func createSubscription(resourceAddress string) (sub pubsub.PubSub, err error) { var status int apiPath := \"/api/ocloudNotifications/v2/\" localAPIAddr := \"consumer-events-subscription-service.cloud-events.svc.cluster.local:9043\" // vDU service API address apiAddr := \"ptp-event-publisher-service-<node_name>.openshift-ptp.svc.cluster.local:9043\" 1 apiVersion := \"2.0\" subURL := &types.URI{URL: url.URL{Scheme: \"http\", Host: apiAddr Path: fmt.Sprintf(\"%s%s\", apiPath, \"subscriptions\")}} endpointURL := &types.URI{URL: url.URL{Scheme: \"http\", Host: localAPIAddr, Path: \"event\"}} sub = v1pubsub.NewPubSub(endpointURL, resourceAddress, apiVersion) var subB []byte if subB, err = json.Marshal(&sub); err == nil { rc := restclient.New() if status, subB = rc.PostWithReturn(subURL, subB); status != http.StatusCreated { err = fmt.Errorf(\"error in subscription creation api at %s, returned status %d\", subURL, status) } else { err = json.Unmarshal(subB, &sub) } } else { err = fmt.Errorf(\"failed to marshal subscription for %s\", resourceAddress) } return }",
"//Get PTP event state for the resource func getCurrentState(resource string) { //Create publisher url := &types.URI{URL: url.URL{Scheme: \"http\", Host: \"ptp-event-publisher-service-<node_name>.openshift-ptp.svc.cluster.local:9043\", 1 Path: fmt.SPrintf(\"/api/ocloudNotifications/v2/%s/CurrentState\",resource}} rc := restclient.New() status, event := rc.Get(url) if status != http.StatusOK { log.Errorf(\"CurrentState:error %d from url %s, %s\", status, url.String(), event) } else { log.Debugf(\"Got CurrentState: %s \", event) } }",
"apiVersion: v1 kind: Namespace metadata: name: cloud-events labels: security.openshift.io/scc.podSecurityLabelSync: \"false\" pod-security.kubernetes.io/audit: \"privileged\" pod-security.kubernetes.io/enforce: \"privileged\" pod-security.kubernetes.io/warn: \"privileged\" name: cloud-events openshift.io/cluster-monitoring: \"true\" annotations: workload.openshift.io/allowed: management",
"apiVersion: apps/v1 kind: Deployment metadata: name: cloud-consumer-deployment namespace: cloud-events labels: app: consumer spec: replicas: 1 selector: matchLabels: app: consumer template: metadata: annotations: target.workload.openshift.io/management: '{\"effect\": \"PreferredDuringScheduling\"}' labels: app: consumer spec: nodeSelector: node-role.kubernetes.io/worker: \"\" serviceAccountName: consumer-sa containers: - name: cloud-event-consumer image: cloud-event-consumer imagePullPolicy: Always args: - \"--local-api-addr=consumer-events-subscription-service.cloud-events.svc.cluster.local:9043\" - \"--api-path=/api/ocloudNotifications/v2/\" - \"--api-addr=127.0.0.1:8089\" - \"--api-version=2.0\" - \"--http-event-publishers=ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043\" env: - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName - name: CONSUMER_TYPE value: \"PTP\" - name: ENABLE_STATUS_CHECK value: \"true\" volumes: - name: pubsubstore emptyDir: {}",
"apiVersion: v1 kind: ServiceAccount metadata: name: consumer-sa namespace: cloud-events",
"apiVersion: v1 kind: Service metadata: annotations: prometheus.io/scrape: \"true\" name: consumer-events-subscription-service namespace: cloud-events labels: app: consumer-service spec: ports: - name: sub-port port: 9043 selector: app: consumer sessionAffinity: None type: ClusterIP",
"oc -n cloud-events logs -f deployment/cloud-consumer-deployment",
"time = \"2024-09-02T13:49:01Z\" level = info msg = \"transport host path is set to ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043\" time = \"2024-09-02T13:49:01Z\" level = info msg = \"apiVersion=2.0, updated apiAddr=ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043, apiPath=/api/ocloudNotifications/v2/\" time = \"2024-09-02T13:49:01Z\" level = info msg = \"Starting local API listening to :9043\" time = \"2024-09-02T13:49:06Z\" level = info msg = \"transport host path is set to ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043\" time = \"2024-09-02T13:49:06Z\" level = info msg = \"checking for rest service health\" time = \"2024-09-02T13:49:06Z\" level = info msg = \"health check http://ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/health\" time = \"2024-09-02T13:49:07Z\" level = info msg = \"rest service returned healthy status\" time = \"2024-09-02T13:49:07Z\" level = info msg = \"healthy publisher; subscribing to events\" time = \"2024-09-02T13:49:07Z\" level = info msg = \"received event {\\\"specversion\\\":\\\"1.0\\\",\\\"id\\\":\\\"ab423275-f65d-4760-97af-5b0b846605e4\\\",\\\"source\\\":\\\"/sync/ptp-status/clock-class\\\",\\\"type\\\":\\\"event.sync.ptp-status.ptp-clock-class-change\\\",\\\"time\\\":\\\"2024-09-02T13:49:07.226494483Z\\\",\\\"data\\\":{\\\"version\\\":\\\"1.0\\\",\\\"values\\\":[{\\\"ResourceAddress\\\":\\\"/cluster/node/compute-1.example.com/ptp-not-set\\\",\\\"data_type\\\":\\\"metric\\\",\\\"value_type\\\":\\\"decimal64.3\\\",\\\"value\\\":\\\"0\\\"}]}}\"",
"oc port-forward -n openshift-ptp ds/linuxptp-daemon 9043:9043",
"Forwarding from 127.0.0.1:9043 -> 9043 Forwarding from [::1]:9043 -> 9043 Handling connection for 9043",
"curl -X GET http://localhost:9043/api/ocloudNotifications/v2/health",
"OK",
"oc debug node/<node_name>",
"sh-4.4# curl http://localhost:9091/metrics",
"HELP cne_api_events_published Metric to get number of events published by the rest api TYPE cne_api_events_published gauge cne_api_events_published{address=\"/cluster/node/compute-1.example.com/sync/gnss-status/gnss-sync-status\",status=\"success\"} 1 cne_api_events_published{address=\"/cluster/node/compute-1.example.com/sync/ptp-status/lock-state\",status=\"success\"} 94 cne_api_events_published{address=\"/cluster/node/compute-1.example.com/sync/ptp-status/class-change\",status=\"success\"} 18 cne_api_events_published{address=\"/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state\",status=\"success\"} 27",
"oc logs -f linuxptp-daemon-cvgr6 -n openshift-ptp -c cloud-event-proxy",
"[ { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state\", \"EndpointUri\": \"http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event\", \"SubscriptionId\": \"ccedbf08-3f96-4839-a0b6-2eb0401855ed\", \"UriLocation\": \"http://ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/subscriptions/ccedbf08-3f96-4839-a0b6-2eb0401855ed\" }, { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/sync/ptp-status/clock-class\", \"EndpointUri\": \"http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event\", \"SubscriptionId\": \"a939a656-1b7d-4071-8cf1-f99af6e931f2\", \"UriLocation\": \"http://ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/subscriptions/a939a656-1b7d-4071-8cf1-f99af6e931f2\" }, { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/sync/ptp-status/lock-state\", \"EndpointUri\": \"http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event\", \"SubscriptionId\": \"ba4564a3-4d9e-46c5-b118-591d3105473c\", \"UriLocation\": \"http://ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/subscriptions/ba4564a3-4d9e-46c5-b118-591d3105473c\" }, { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/sync/gnss-status/gnss-sync-status\", \"EndpointUri\": \"http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event\", \"SubscriptionId\": \"ea0d772e-f00a-4889-98be-51635559b4fb\", \"UriLocation\": \"http://ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/subscriptions/ea0d772e-f00a-4889-98be-51635559b4fb\" }, { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/sync/sync-status/sync-state\", \"EndpointUri\": \"http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event\", \"SubscriptionId\": \"762999bf-b4a0-4bad-abe8-66e646b65754\", \"UriLocation\": \"http://ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/subscriptions/762999bf-b4a0-4bad-abe8-66e646b65754\" } ]",
"{ \"EndpointUri\": \"http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event\", \"ResourceAddress\": \"/cluster/node/{node_name}/sync/sync-status/sync-state\" }",
"{ \"EndpointUri\": \"http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event\", \"ResourceAddress\": \"/cluster/node/{node_name}/sync/ptp-status/lock-state\" }",
"{ \"EndpointUri\": \"http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event\", \"ResourceAddress\": \"/cluster/node/{node_name}/sync/gnss-status/gnss-sync-status\" }",
"{ \"EndpointUri\": \"http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event\", \"ResourceAddress\": \"/cluster/node/{node_name}/sync/sync-status/os-clock-sync-state\" }",
"{ \"EndpointUri\": \"http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event\", \"ResourceAddress\": \"/cluster/node/{node_name}/sync/ptp-status/clock-class\" }",
"{ \"ResourceAddress\": \"/cluster/node/compute-1.example.com/sync/ptp-status/lock-state\", \"EndpointUri\": \"http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event\", \"SubscriptionId\": \"620283f3-26cd-4a6d-b80a-bdc4b614a96a\", \"UriLocation\": \"http://ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/subscriptions/620283f3-26cd-4a6d-b80a-bdc4b614a96a\" }",
"{ \"status\": \"deleted all subscriptions\" }",
"{ \"ResourceAddress\": \"/cluster/node/compute-1.example.com/sync/ptp-status/lock-state\", \"EndpointUri\": \"http://consumer-events-subscription-service.cloud-events.svc.cluster.local:9043/event\", \"SubscriptionId\": \"620283f3-26cd-4a6d-b80a-bdc4b614a96a\", \"UriLocation\": \"http://ptp-event-publisher-service-compute-1.openshift-ptp.svc.cluster.local:9043/api/ocloudNotifications/v2/subscriptions/620283f3-26cd-4a6d-b80a-bdc4b614a96a\" }",
"[ { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/sync/sync-status/sync-state\", \"EndpointUri\": \"http://localhost:9043/api/ocloudNotifications/v2/dummy\", \"SubscriptionId\": \"4ea72bfa-185c-4703-9694-cdd0434cd570\", \"UriLocation\": \"http://localhost:9043/api/ocloudNotifications/v2/publishers/4ea72bfa-185c-4703-9694-cdd0434cd570\" }, { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state\", \"EndpointUri\": \"http://localhost:9043/api/ocloudNotifications/v2/dummy\", \"SubscriptionId\": \"71fbb38e-a65d-41fc-823b-d76407901087\", \"UriLocation\": \"http://localhost:9043/api/ocloudNotifications/v2/publishers/71fbb38e-a65d-41fc-823b-d76407901087\" }, { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/sync/ptp-status/clock-class\", \"EndpointUri\": \"http://localhost:9043/api/ocloudNotifications/v2/dummy\", \"SubscriptionId\": \"7bc27cad-03f4-44a9-8060-a029566e7926\", \"UriLocation\": \"http://localhost:9043/api/ocloudNotifications/v2/publishers/7bc27cad-03f4-44a9-8060-a029566e7926\" }, { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/sync/ptp-status/lock-state\", \"EndpointUri\": \"http://localhost:9043/api/ocloudNotifications/v2/dummy\", \"SubscriptionId\": \"6e7b6736-f359-46b9-991c-fbaed25eb554\", \"UriLocation\": \"http://localhost:9043/api/ocloudNotifications/v2/publishers/6e7b6736-f359-46b9-991c-fbaed25eb554\" }, { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/sync/gnss-status/gnss-sync-status\", \"EndpointUri\": \"http://localhost:9043/api/ocloudNotifications/v2/dummy\", \"SubscriptionId\": \"31bb0a45-7892-45d4-91dd-13035b13ed18\", \"UriLocation\": \"http://localhost:9043/api/ocloudNotifications/v2/publishers/31bb0a45-7892-45d4-91dd-13035b13ed18\" } ]",
"{ \"id\": \"c1ac3aa5-1195-4786-84f8-da0ea4462921\", \"type\": \"event.sync.ptp-status.ptp-state-change\", \"source\": \"/cluster/node/compute-1.example.com/sync/ptp-status/lock-state\", \"dataContentType\": \"application/json\", \"time\": \"2023-01-10T02:41:57.094981478Z\", \"data\": { \"version\": \"1.0\", \"values\": [ { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/ens5fx/master\", \"dataType\": \"notification\", \"valueType\": \"enumeration\", \"value\": \"LOCKED\" }, { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/ens5fx/master\", \"dataType\": \"metric\", \"valueType\": \"decimal64.3\", \"value\": \"29\" } ] } }",
"{ \"specversion\": \"0.3\", \"id\": \"4f51fe99-feaa-4e66-9112-66c5c9b9afcb\", \"source\": \"/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state\", \"type\": \"event.sync.sync-status.os-clock-sync-state-change\", \"subject\": \"/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state\", \"datacontenttype\": \"application/json\", \"time\": \"2022-11-29T17:44:22.202Z\", \"data\": { \"version\": \"1.0\", \"values\": [ { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/CLOCK_REALTIME\", \"dataType\": \"notification\", \"valueType\": \"enumeration\", \"value\": \"LOCKED\" }, { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/CLOCK_REALTIME\", \"dataType\": \"metric\", \"valueType\": \"decimal64.3\", \"value\": \"27\" } ] } }",
"{ \"id\": \"064c9e67-5ad4-4afb-98ff-189c6aa9c205\", \"type\": \"event.sync.ptp-status.ptp-clock-class-change\", \"source\": \"/cluster/node/compute-1.example.com/sync/ptp-status/clock-class\", \"dataContentType\": \"application/json\", \"time\": \"2023-01-10T02:41:56.785673989Z\", \"data\": { \"version\": \"1.0\", \"values\": [ { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/ens5fx/master\", \"dataType\": \"metric\", \"valueType\": \"decimal64.3\", \"value\": \"165\" } ] } }",
"{ \"specversion\": \"0.3\", \"id\": \"8c9d6ecb-ae9f-4106-82c4-0a778a79838d\", \"source\": \"/sync/sync-status/sync-state\", \"type\": \"event.sync.sync-status.synchronization-state-change\", \"subject\": \"/cluster/node/compute-1.example.com/sync/sync-status/sync-state\", \"datacontenttype\": \"application/json\", \"time\": \"2024-08-28T14:50:57.327585316Z\", \"data\": { \"version\": \"1.0\", \"values\": [ { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/sync/sync-status/sync-state\", \"data_type\": \"notification\", \"value_type\": \"enumeration\", \"value\": \"LOCKED\" }] } }",
"{ \"id\": \"435e1f2a-6854-4555-8520-767325c087d7\", \"type\": \"event.sync.gnss-status.gnss-state-change\", \"source\": \"/cluster/node/compute-1.example.com/sync/gnss-status/gnss-sync-status\", \"dataContentType\": \"application/json\", \"time\": \"2023-09-27T19:35:33.42347206Z\", \"data\": { \"version\": \"1.0\", \"values\": [ { \"resource\": \"/cluster/node/compute-1.example.com/ens2fx/master\", \"dataType\": \"notification\", \"valueType\": \"enumeration\", \"value\": \"LOCKED\" }, { \"resource\": \"/cluster/node/compute-1.example.com/ens2fx/master\", \"dataType\": \"metric\", \"valueType\": \"decimal64.3\", \"value\": \"5\" } ] } }",
"apiVersion: ptp.openshift.io/v1 kind: PtpOperatorConfig metadata: name: default namespace: openshift-ptp spec: daemonNodeSelector: node-role.kubernetes.io/worker: \"\" ptpEventConfig: enableEventPublisher: true 1",
"oc apply -f ptp-operatorconfig.yaml",
"spec: profile: - name: \"profile1\" interface: \"enp5s0f0\" ptp4lOpts: \"-2 -s --summary_interval -4\" 1 phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" 2 ptp4lConf: \"\" 3 ptpClockThreshold: 4 holdOverTimeout: 5 maxOffsetThreshold: 100 minOffsetThreshold: -100",
"func server() { http.HandleFunc(\"/event\", getEvent) http.ListenAndServe(\"localhost:8989\", nil) } func getEvent(w http.ResponseWriter, req *http.Request) { defer req.Body.Close() bodyBytes, err := io.ReadAll(req.Body) if err != nil { log.Errorf(\"error reading event %v\", err) } e := string(bodyBytes) if e != \"\" { processEvent(bodyBytes) log.Infof(\"received event %s\", string(bodyBytes)) } else { w.WriteHeader(http.StatusNoContent) } }",
"import ( \"github.com/redhat-cne/sdk-go/pkg/pubsub\" \"github.com/redhat-cne/sdk-go/pkg/types\" v1pubsub \"github.com/redhat-cne/sdk-go/v1/pubsub\" ) // Subscribe to PTP events using REST API s1,_:=createsubscription(\"/cluster/node/<node_name>/sync/sync-status/os-clock-sync-state\") 1 s2,_:=createsubscription(\"/cluster/node/<node_name>/sync/ptp-status/class-change\") s3,_:=createsubscription(\"/cluster/node/<node_name>/sync/ptp-status/lock-state\") // Create PTP event subscriptions POST func createSubscription(resourceAddress string) (sub pubsub.PubSub, err error) { var status int apiPath:= \"/api/ocloudNotifications/v1/\" localAPIAddr:=localhost:8989 // vDU service API address apiAddr:= \"localhost:8089\" // event framework API address subURL := &types.URI{URL: url.URL{Scheme: \"http\", Host: apiAddr Path: fmt.Sprintf(\"%s%s\", apiPath, \"subscriptions\")}} endpointURL := &types.URI{URL: url.URL{Scheme: \"http\", Host: localAPIAddr, Path: \"event\"}} sub = v1pubsub.NewPubSub(endpointURL, resourceAddress) var subB []byte if subB, err = json.Marshal(&sub); err == nil { rc := restclient.New() if status, subB = rc.PostWithReturn(subURL, subB); status != http.StatusCreated { err = fmt.Errorf(\"error in subscription creation api at %s, returned status %d\", subURL, status) } else { err = json.Unmarshal(subB, &sub) } } else { err = fmt.Errorf(\"failed to marshal subscription for %s\", resourceAddress) } return }",
"//Get PTP event state for the resource func getCurrentState(resource string) { //Create publisher url := &types.URI{URL: url.URL{Scheme: \"http\", Host: localhost:8989, Path: fmt.SPrintf(\"/api/ocloudNotifications/v1/%s/CurrentState\",resource}} rc := restclient.New() status, event := rc.Get(url) if status != http.StatusOK { log.Errorf(\"CurrentState:error %d from url %s, %s\", status, url.String(), event) } else { log.Debugf(\"Got CurrentState: %s \", event) } }",
"apiVersion: apps/v1 kind: Deployment metadata: name: event-consumer-deployment namespace: <namespace> labels: app: consumer spec: replicas: 1 selector: matchLabels: app: consumer template: metadata: labels: app: consumer spec: serviceAccountName: sidecar-consumer-sa containers: - name: event-subscriber image: event-subscriber-app - name: cloud-event-proxy-as-sidecar image: openshift4/ose-cloud-event-proxy args: - \"--metrics-addr=127.0.0.1:9091\" - \"--store-path=/store\" - \"--transport-host=consumer-events-subscription-service.cloud-events.svc.cluster.local:9043\" - \"--http-event-publishers=ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043\" - \"--api-port=8089\" env: - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName - name: NODE_IP valueFrom: fieldRef: fieldPath: status.hostIP volumeMounts: - name: pubsubstore mountPath: /store ports: - name: metrics-port containerPort: 9091 - name: sub-port containerPort: 9043 volumes: - name: pubsubstore emptyDir: {}",
"apiVersion: v1 kind: Service metadata: annotations: prometheus.io/scrape: \"true\" service.alpha.openshift.io/serving-cert-secret-name: sidecar-consumer-secret name: consumer-events-subscription-service namespace: cloud-events labels: app: consumer-service spec: ports: - name: sub-port port: 9043 selector: app: consumer clusterIP: None sessionAffinity: None type: ClusterIP",
"oc get pods -n openshift-ptp",
"NAME READY STATUS RESTARTS AGE linuxptp-daemon-2t78p 3/3 Running 0 8h linuxptp-daemon-k8n88 3/3 Running 0 8h",
"oc exec -it <linuxptp-daemon> -n openshift-ptp -c cloud-event-proxy -- curl 127.0.0.1:9091/metrics",
"HELP cne_transport_connections_resets Metric to get number of connection resets TYPE cne_transport_connections_resets gauge cne_transport_connection_reset 1 HELP cne_transport_receiver Metric to get number of receiver created TYPE cne_transport_receiver gauge cne_transport_receiver{address=\"/cluster/node/compute-1.example.com/ptp\",status=\"active\"} 2 cne_transport_receiver{address=\"/cluster/node/compute-1.example.com/redfish/event\",status=\"active\"} 2 HELP cne_transport_sender Metric to get number of sender created TYPE cne_transport_sender gauge cne_transport_sender{address=\"/cluster/node/compute-1.example.com/ptp\",status=\"active\"} 1 cne_transport_sender{address=\"/cluster/node/compute-1.example.com/redfish/event\",status=\"active\"} 1 HELP cne_events_ack Metric to get number of events produced TYPE cne_events_ack gauge cne_events_ack{status=\"success\",type=\"/cluster/node/compute-1.example.com/ptp\"} 18 cne_events_ack{status=\"success\",type=\"/cluster/node/compute-1.example.com/redfish/event\"} 18 HELP cne_events_transport_published Metric to get number of events published by the transport TYPE cne_events_transport_published gauge cne_events_transport_published{address=\"/cluster/node/compute-1.example.com/ptp\",status=\"failed\"} 1 cne_events_transport_published{address=\"/cluster/node/compute-1.example.com/ptp\",status=\"success\"} 18 cne_events_transport_published{address=\"/cluster/node/compute-1.example.com/redfish/event\",status=\"failed\"} 1 cne_events_transport_published{address=\"/cluster/node/compute-1.example.com/redfish/event\",status=\"success\"} 18 HELP cne_events_transport_received Metric to get number of events received by the transport TYPE cne_events_transport_received gauge cne_events_transport_received{address=\"/cluster/node/compute-1.example.com/ptp\",status=\"success\"} 18 cne_events_transport_received{address=\"/cluster/node/compute-1.example.com/redfish/event\",status=\"success\"} 18 HELP cne_events_api_published Metric to get number of events published by the rest api TYPE cne_events_api_published gauge cne_events_api_published{address=\"/cluster/node/compute-1.example.com/ptp\",status=\"success\"} 19 cne_events_api_published{address=\"/cluster/node/compute-1.example.com/redfish/event\",status=\"success\"} 19 HELP cne_events_received Metric to get number of events received TYPE cne_events_received gauge cne_events_received{status=\"success\",type=\"/cluster/node/compute-1.example.com/ptp\"} 18 cne_events_received{status=\"success\",type=\"/cluster/node/compute-1.example.com/redfish/event\"} 18 HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served. TYPE promhttp_metric_handler_requests_in_flight gauge promhttp_metric_handler_requests_in_flight 1 HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code. TYPE promhttp_metric_handler_requests_total counter promhttp_metric_handler_requests_total{code=\"200\"} 4 promhttp_metric_handler_requests_total{code=\"500\"} 0 promhttp_metric_handler_requests_total{code=\"503\"} 0",
"oc debug node/<node_name>",
"sh-4.4# curl http://localhost:9091/metrics",
"HELP cne_api_events_published Metric to get number of events published by the rest api TYPE cne_api_events_published gauge cne_api_events_published{address=\"/cluster/node/compute-1.example.com/sync/gnss-status/gnss-sync-status\",status=\"success\"} 1 cne_api_events_published{address=\"/cluster/node/compute-1.example.com/sync/ptp-status/lock-state\",status=\"success\"} 94 cne_api_events_published{address=\"/cluster/node/compute-1.example.com/sync/ptp-status/class-change\",status=\"success\"} 18 cne_api_events_published{address=\"/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state\",status=\"success\"} 27",
"oc logs -f linuxptp-daemon-cvgr6 -n openshift-ptp -c cloud-event-proxy",
"[ { \"id\": \"75b1ad8f-c807-4c23-acf5-56f4b7ee3826\", \"endpointUri\": \"http://localhost:9089/event\", \"uriLocation\": \"http://localhost:8089/api/ocloudNotifications/v1/subscriptions/75b1ad8f-c807-4c23-acf5-56f4b7ee3826\", \"resource\": \"/cluster/node/compute-1.example.com/ptp\" } ]",
"{ \"uriLocation\": \"http://localhost:8089/api/ocloudNotifications/v1/subscriptions\", \"resource\": \"/cluster/node/compute-1.example.com/ptp\" }",
"{ \"endpointUri\": \"http://localhost:8989/event\", \"resource\": \"/cluster/node/{node_name}/sync/ptp-status/lock-state\" }",
"{ \"endpointUri\": \"http://localhost:8989/event\", \"resource\": \"/cluster/node/{node_name}/sync/sync-status/os-clock-sync-state\" }",
"{ \"endpointUri\": \"http://localhost:8989/event\", \"resource\": \"/cluster/node/{node_name}/sync/ptp-status/clock-class\" }",
"{ \"endpointUri\": \"http://localhost:8989/event\", \"resource\": \"/cluster/node/{node_name}/sync/gnss-status/gnss-sync-status\" }",
"{ \"endpointUri\": \"http://localhost:8989/event\", \"resource\": \"/cluster/node/{node_name}/sync/sync-status/sync-state\" }",
"{ \"status\": \"deleted all subscriptions\" }",
"{ \"id\":\"48210fb3-45be-4ce0-aa9b-41a0e58730ab\", \"endpointUri\": \"http://localhost:9089/event\", \"uriLocation\":\"http://localhost:8089/api/ocloudNotifications/v1/subscriptions/48210fb3-45be-4ce0-aa9b-41a0e58730ab\", \"resource\":\"/cluster/node/compute-1.example.com/ptp\" }",
"{ \"status\": \"OK\" }",
"OK",
"[ { \"id\": \"0fa415ae-a3cf-4299-876a-589438bacf75\", \"endpointUri\": \"http://localhost:9085/api/ocloudNotifications/v1/dummy\", \"uriLocation\": \"http://localhost:9085/api/ocloudNotifications/v1/publishers/0fa415ae-a3cf-4299-876a-589438bacf75\", \"resource\": \"/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state\" }, { \"id\": \"28cd82df-8436-4f50-bbd9-7a9742828a71\", \"endpointUri\": \"http://localhost:9085/api/ocloudNotifications/v1/dummy\", \"uriLocation\": \"http://localhost:9085/api/ocloudNotifications/v1/publishers/28cd82df-8436-4f50-bbd9-7a9742828a71\", \"resource\": \"/cluster/node/compute-1.example.com/sync/ptp-status/clock-class\" }, { \"id\": \"44aa480d-7347-48b0-a5b0-e0af01fa9677\", \"endpointUri\": \"http://localhost:9085/api/ocloudNotifications/v1/dummy\", \"uriLocation\": \"http://localhost:9085/api/ocloudNotifications/v1/publishers/44aa480d-7347-48b0-a5b0-e0af01fa9677\", \"resource\": \"/cluster/node/compute-1.example.com/sync/ptp-status/lock-state\" }, { \"id\": \"778da345d-4567-67b0-a43f0-rty885a456\", \"endpointUri\": \"http://localhost:9085/api/ocloudNotifications/v1/dummy\", \"uriLocation\": \"http://localhost:9085/api/ocloudNotifications/v1/publishers/778da345d-4567-67b0-a43f0-rty885a456\", \"resource\": \"/cluster/node/compute-1.example.com/sync/gnss-status/gnss-sync-status\" } ]",
"{ \"id\": \"c1ac3aa5-1195-4786-84f8-da0ea4462921\", \"type\": \"event.sync.ptp-status.ptp-state-change\", \"source\": \"/cluster/node/compute-1.example.com/sync/ptp-status/lock-state\", \"dataContentType\": \"application/json\", \"time\": \"2023-01-10T02:41:57.094981478Z\", \"data\": { \"version\": \"1.0\", \"values\": [ { \"resource\": \"/cluster/node/compute-1.example.com/ens5fx/master\", \"dataType\": \"notification\", \"valueType\": \"enumeration\", \"value\": \"LOCKED\" }, { \"resource\": \"/cluster/node/compute-1.example.com/ens5fx/master\", \"dataType\": \"metric\", \"valueType\": \"decimal64.3\", \"value\": \"29\" } ] } }",
"{ \"specversion\": \"0.3\", \"id\": \"4f51fe99-feaa-4e66-9112-66c5c9b9afcb\", \"source\": \"/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state\", \"type\": \"event.sync.sync-status.os-clock-sync-state-change\", \"subject\": \"/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state\", \"datacontenttype\": \"application/json\", \"time\": \"2022-11-29T17:44:22.202Z\", \"data\": { \"version\": \"1.0\", \"values\": [ { \"resource\": \"/cluster/node/compute-1.example.com/CLOCK_REALTIME\", \"dataType\": \"notification\", \"valueType\": \"enumeration\", \"value\": \"LOCKED\" }, { \"resource\": \"/cluster/node/compute-1.example.com/CLOCK_REALTIME\", \"dataType\": \"metric\", \"valueType\": \"decimal64.3\", \"value\": \"27\" } ] } }",
"{ \"id\": \"064c9e67-5ad4-4afb-98ff-189c6aa9c205\", \"type\": \"event.sync.ptp-status.ptp-clock-class-change\", \"source\": \"/cluster/node/compute-1.example.com/sync/ptp-status/clock-class\", \"dataContentType\": \"application/json\", \"time\": \"2023-01-10T02:41:56.785673989Z\", \"data\": { \"version\": \"1.0\", \"values\": [ { \"resource\": \"/cluster/node/compute-1.example.com/ens5fx/master\", \"dataType\": \"metric\", \"valueType\": \"decimal64.3\", \"value\": \"165\" } ] } }",
"{ \"specversion\": \"0.3\", \"id\": \"8c9d6ecb-ae9f-4106-82c4-0a778a79838d\", \"source\": \"/sync/sync-status/sync-state\", \"type\": \"event.sync.sync-status.synchronization-state-change\", \"subject\": \"/cluster/node/compute-1.example.com/sync/sync-status/sync-state\", \"datacontenttype\": \"application/json\", \"time\": \"2024-08-28T14:50:57.327585316Z\", \"data\": { \"version\": \"1.0\", \"values\": [ { \"ResourceAddress\": \"/cluster/node/compute-1.example.com/sync/sync-status/sync-state\", \"data_type\": \"notification\", \"value_type\": \"enumeration\", \"value\": \"LOCKED\" }] } }",
"{ \"id\": \"435e1f2a-6854-4555-8520-767325c087d7\", \"type\": \"event.sync.gnss-status.gnss-state-change\", \"source\": \"/cluster/node/compute-1.example.com/sync/gnss-status/gnss-sync-status\", \"dataContentType\": \"application/json\", \"time\": \"2023-09-27T19:35:33.42347206Z\", \"data\": { \"version\": \"1.0\", \"values\": [ { \"resource\": \"/cluster/node/compute-1.example.com/ens2fx/master\", \"dataType\": \"notification\", \"valueType\": \"enumeration\", \"value\": \"LOCKED\" }, { \"resource\": \"/cluster/node/compute-1.example.com/ens2fx/master\", \"dataType\": \"metric\", \"valueType\": \"decimal64.3\", \"value\": \"5\" } ] } }",
"cat << EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: name: <cudn_namespace_name> labels: k8s.ovn.org/primary-user-defined-network: \"\" EOF",
"apiVersion: k8s.ovn.org/v1 kind: ClusterUserDefinedNetwork metadata: name: <cudn_name> 1 spec: namespaceSelector: 2 matchLabels: 3 - \"<example_namespace_one>\":\"\" 4 - \"<example_namespace_two>\":\"\" 5 network: 6 topology: Layer2 7 layer2: 8 role: Primary 9 subnets: - \"2001:db8::/64\" - \"10.100.0.0/16\" 10",
"apiVersion: k8s.ovn.org/v1 kind: ClusterUserDefinedNetwork metadata: name: <cudn_name> 1 spec: namespaceSelector: 2 matchExpressions: 3 - key: kubernetes.io/metadata.name 4 operator: In 5 values: [\"<example_namespace_one>\", \"<example_namespace_two>\"] 6 network: 7 topology: Layer3 8 layer3: 9 role: Primary 10 subnets: 11 - cidr: 10.100.0.0/16 hostSubnet: 24",
"oc create --validate=true -f <example_cluster_udn>.yaml",
"oc get clusteruserdefinednetwork <cudn_name> -o yaml",
"apiVersion: k8s.ovn.org/v1 kind: ClusterUserDefinedNetwork metadata: creationTimestamp: \"2024-12-05T15:53:00Z\" finalizers: - k8s.ovn.org/user-defined-network-protection generation: 1 name: my-cudn resourceVersion: \"47985\" uid: 16ee0fcf-74d1-4826-a6b7-25c737c1a634 spec: namespaceSelector: matchExpressions: - key: custom.network.selector operator: In values: - example-namespace-1 - example-namespace-2 - example-namespace-3 network: layer3: role: Primary subnets: - cidr: 10.100.0.0/16 topology: Layer3 status: conditions: - lastTransitionTime: \"2024-11-19T16:46:34Z\" message: 'NetworkAttachmentDefinition has been created in following namespaces: [example-namespace-1, example-namespace-2, example-namespace-3]' reason: NetworkAttachmentDefinitionReady status: \"True\" type: NetworkCreated",
"cat << EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: name: <udn_namespace_name> labels: k8s.ovn.org/primary-user-defined-network: \"\" EOF",
"apiVersion: k8s.ovn.org/v1 kind: UserDefinedNetwork metadata: name: udn-1 1 namespace: <some_custom_namespace> spec: topology: Layer2 2 layer2: 3 role: Primary 4 subnets: - \"10.0.0.0/24\" - \"2001:db8::/60\" 5",
"apiVersion: k8s.ovn.org/v1 kind: UserDefinedNetwork metadata: name: udn-2-primary 1 namespace: <some_custom_namespace> spec: topology: Layer3 2 layer3: 3 role: Primary 4 subnets: 5 - cidr: 10.150.0.0/16 hostSubnet: 24 - cidr: 2001:db8::/60 hostSubnet: 64",
"oc apply -f <my_layer_two_udn>.yaml",
"oc get userdefinednetworks udn-1 -n <some_custom_namespace> -o yaml",
"apiVersion: k8s.ovn.org/v1 kind: UserDefinedNetwork metadata: creationTimestamp: \"2024-08-28T17:18:47Z\" finalizers: - k8s.ovn.org/user-defined-network-protection generation: 1 name: udn-1 namespace: some-custom-namespace resourceVersion: \"53313\" uid: f483626d-6846-48a1-b88e-6bbeb8bcde8c spec: layer2: role: Primary subnets: - 10.0.0.0/24 - 2001:db8::/60 topology: Layer2 status: conditions: - lastTransitionTime: \"2024-08-28T17:18:47Z\" message: NetworkAttachmentDefinition has been created reason: NetworkAttachmentDefinitionReady status: \"True\" type: NetworkCreated",
"apiVersion: v1 kind: Pod metadata: annotations: k8s.ovn.org/open-default-ports: | - protocol: tcp port: 80 - protocol: udp port: 53",
"openstack subnet set --dns-nameserver 0.0.0.0 <subnet_id>",
"oc create namespace <namespace_name>",
"oc edit networks.operator.openshift.io cluster",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: # additionalNetworks: - name: tertiary-net namespace: namespace2 type: Raw rawCNIConfig: |- { \"cniVersion\": \"0.3.1\", \"name\": \"tertiary-net\", \"type\": \"ipvlan\", \"master\": \"eth1\", \"mode\": \"l2\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"192.168.1.23/24\" } ] } }",
"oc get network-attachment-definitions -n <namespace>",
"NAME AGE test-network-1 14m",
"apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: next-net spec: config: |- { \"cniVersion\": \"0.3.1\", \"name\": \"work-network\", \"namespace\": \"namespace2\", 1 \"type\": \"host-device\", \"device\": \"eth1\", \"ipam\": { \"type\": \"dhcp\" } }",
"oc apply -f <file>.yaml",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: allow-same-namespace annotations: k8s.v1.cni.cncf.io/policy-for: blue2 spec: podSelector: ingress: - from: - podSelector: {}",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: ingress-ipblock annotations: k8s.v1.cni.cncf.io/policy-for: default/flatl2net spec: podSelector: matchLabels: name: access-control policyTypes: - Ingress ingress: - from: - ipBlock: cidr: 10.200.0.0/30",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: mapping 1 spec: nodeSelector: node-role.kubernetes.io/worker: '' 2 desiredState: ovn: bridge-mappings: - localnet: localnet1 3 bridge: br-ex 4 state: present 5",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ovs-br1-multiple-networks 1 spec: nodeSelector: node-role.kubernetes.io/worker: '' 2 desiredState: interfaces: - name: ovs-br1 3 description: |- A dedicated OVS bridge with eth1 as a port allowing all VLANs and untagged traffic type: ovs-bridge state: up bridge: allow-extra-patch-ports: true options: stp: false port: - name: eth1 4 ovn: bridge-mappings: - localnet: localnet2 5 bridge: ovs-br1 6 state: present 7",
"{ \"cniVersion\": \"0.3.1\", \"name\": \"ns1-localnet-network\", \"type\": \"ovn-k8s-cni-overlay\", \"topology\":\"localnet\", \"subnets\": \"202.10.130.112/28\", \"vlanID\": 33, \"mtu\": 1500, \"netAttachDefName\": \"ns1/localnet-network\" \"excludeSubnets\": \"10.100.200.0/29\" }",
"{ \"cniVersion\": \"0.3.1\", \"name\": \"l2-network\", \"type\": \"ovn-k8s-cni-overlay\", \"topology\":\"layer2\", \"subnets\": \"10.100.200.0/24\", \"mtu\": 1300, \"netAttachDefName\": \"ns1/l2-network\", \"excludeSubnets\": \"10.100.200.0/29\" }",
"apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: l2-network name: tinypod namespace: ns1 spec: containers: - args: - pause image: k8s.gcr.io/e2e-test-images/agnhost:2.36 imagePullPolicy: IfNotPresent name: agnhost-container",
"apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"l2-network\", 1 \"mac\": \"02:03:04:05:06:07\", 2 \"interface\": \"myiface1\", 3 \"ips\": [ \"192.0.2.20/24\" ] 4 } ]' name: tinypod namespace: ns1 spec: containers: - args: - pause image: k8s.gcr.io/e2e-test-images/agnhost:2.36 imagePullPolicy: IfNotPresent name: agnhost-container",
"bridge vlan add vid VLAN_ID dev DEV",
"{ \"cniVersion\": \"0.3.1\", \"name\": \"bridge-net\", \"type\": \"bridge\", \"isGateway\": true, \"vlan\": 2, \"ipam\": { \"type\": \"dhcp\" } }",
"{ \"cniVersion\": \"0.3.1\", \"name\": \"hostdev-net\", \"type\": \"host-device\", \"device\": \"eth1\" }",
"{ \"name\": \"vlan-net\", \"cniVersion\": \"0.3.1\", \"type\": \"vlan\", \"master\": \"eth0\", \"mtu\": 1500, \"vlanId\": 5, \"linkInContainer\": false, \"ipam\": { \"type\": \"host-local\", \"subnet\": \"10.1.1.0/24\" }, \"dns\": { \"nameservers\": [ \"10.1.1.1\", \"8.8.8.8\" ] } }",
"{ \"cniVersion\": \"0.3.1\", \"name\": \"ipvlan-net\", \"type\": \"ipvlan\", \"master\": \"eth1\", \"linkInContainer\": false, \"mode\": \"l3\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"192.168.10.10/24\" } ] } }",
"{ \"cniVersion\": \"0.3.1\", \"name\": \"macvlan-net\", \"type\": \"macvlan\", \"master\": \"eth1\", \"linkInContainer\": false, \"mode\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } }",
"{ \"name\": \"mynet\", \"cniVersion\": \"0.3.1\", \"type\": \"tap\", \"mac\": \"00:11:22:33:44:55\", \"mtu\": 1500, \"selinuxcontext\": \"system_u:system_r:container_t:s0\", \"multiQueue\": true, \"owner\": 0, \"group\": 0 \"bridge\": \"br1\" }",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: setsebool.service contents: | [Unit] Description=Set SELinux boolean for the TAP CNI plugin Before=kubelet.service [Service] Type=oneshot ExecStart=/usr/sbin/setsebool container_use_devices=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target",
"oc apply -f setsebool-container-use-devices.yaml",
"oc get machineconfigpools",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-e5e0c8e8be9194e7c5a882e047379cfa True False False 3 3 3 0 7d2h worker rendered-worker-d6c9ca107fba6cd76cdcbfcedcafa0f2 True False False 3 3 3 0 7d",
"metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 1",
"metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"<network>\", 1 \"namespace\": \"<namespace>\", 2 \"default-route\": [\"<default-route>\"] 3 } ]",
"oc create -f <name>.yaml",
"oc get pod <name> -o yaml",
"oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/network-status: |- 1 [{ \"name\": \"ovn-kubernetes\", \"interface\": \"eth0\", \"ips\": [ \"10.128.2.14\" ], \"default\": true, \"dns\": {} },{ \"name\": \"macvlan-bridge\", \"interface\": \"net1\", \"ips\": [ \"20.2.2.100\" ], \"mac\": \"22:2f:60:a5:f8:00\", \"dns\": {} }] name: example-pod namespace: default spec: status:",
"oc edit pod <name>",
"metadata: annotations: k8s.v1.cni.cncf.io/networks: '[<network>[,<network>,...]]' 1",
"apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"net1\" }, { \"name\": \"net2\", 1 \"default-route\": [\"192.0.2.1\"] 2 }]' spec: containers: - name: example-pod command: [\"/bin/bash\", \"-c\", \"sleep 2000000000000\"] image: centos/tools",
"oc exec -it <pod_name> -- ip route",
"oc edit networks.operator.openshift.io cluster",
"name: <name> 1 namespace: <namespace> 2 rawCNIConfig: '{ 3 }' type: Raw",
"{ \"cniVersion\": \"0.3.1\", \"name\": \"<name>\", 1 \"plugins\": [{ 2 \"type\": \"macvlan\", \"capabilities\": { \"ips\": true }, 3 \"master\": \"eth0\", 4 \"mode\": \"bridge\", \"ipam\": { \"type\": \"static\" } }, { \"capabilities\": { \"mac\": true }, 5 \"type\": \"tuning\" }] }",
"oc edit pod <name>",
"apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"<name>\", 1 \"ips\": [ \"192.0.2.205/24\" ], 2 \"mac\": \"CA:FE:C0:FF:EE:00\" 3 } ]'",
"oc exec -it <pod_name> -- ip a",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: annotations: k8s.v1.cni.cncf.io/policy-for: <network_name>",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: useMultiNetworkPolicy: true",
"oc patch network.operator.openshift.io cluster --type=merge --patch-file=multinetwork-enable-patch.yaml",
"network.operator.openshift.io/cluster patched",
"kind: ConfigMap apiVersion: v1 metadata: name: multi-networkpolicy-custom-rules namespace: openshift-multus data: custom-v6-rules.txt: | # accept NDP -p icmpv6 --icmpv6-type neighbor-solicitation -j ACCEPT 1 -p icmpv6 --icmpv6-type neighbor-advertisement -j ACCEPT 2 # accept RA/RS -p icmpv6 --icmpv6-type router-solicitation -j ACCEPT 3 -p icmpv6 --icmpv6-type router-advertisement -j ACCEPT 4",
"touch <policy_name>.yaml",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: deny-by-default annotations: k8s.v1.cni.cncf.io/policy-for:<namespace_name>/<network_name> spec: podSelector: {} policyTypes: - Ingress ingress: []",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: allow-same-namespace annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: ingress: - from: - podSelector: {}",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: allow-traffic-pod annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: pod: pod-a policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: namespace-y",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: api-allow annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: bookstore role: api ingress: - from: - podSelector: matchLabels: app: bookstore",
"oc apply -f <policy_name>.yaml -n <namespace>",
"multinetworkpolicy.k8s.cni.cncf.io/deny-by-default created",
"oc get multi-networkpolicy",
"oc apply -n <namespace> -f <policy_file>.yaml",
"oc edit multi-networkpolicy <policy_name> -n <namespace>",
"oc describe multi-networkpolicy <policy_name> -n <namespace>",
"oc get multi-networkpolicy",
"oc describe multi-networkpolicy <policy_name> -n <namespace>",
"oc delete multi-networkpolicy <policy_name> -n <namespace>",
"multinetworkpolicy.k8s.cni.cncf.io/default-deny deleted",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: deny-by-default namespace: default 1 annotations: k8s.v1.cni.cncf.io/policy-for: <namespace_name>/<network_name> 2 spec: podSelector: {} 3 policyTypes: 4 - Ingress 5 ingress: [] 6",
"oc apply -f deny-by-default.yaml",
"multinetworkpolicy.k8s.cni.cncf.io/deny-by-default created",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-external namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: policyTypes: - Ingress podSelector: matchLabels: app: web ingress: - {}",
"oc apply -f web-allow-external.yaml",
"multinetworkpolicy.k8s.cni.cncf.io/web-allow-external created",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-all-namespaces namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: {} 2",
"oc apply -f web-allow-all-namespaces.yaml",
"multinetworkpolicy.k8s.cni.cncf.io/web-allow-all-namespaces created",
"oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80",
"oc run test-USDRANDOM --namespace=secondary --rm -i -t --image=alpine -- sh",
"wget -qO- --timeout=2 http://web.default",
"<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-prod namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production 2",
"oc apply -f web-allow-prod.yaml",
"multinetworkpolicy.k8s.cni.cncf.io/web-allow-prod created",
"oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80",
"oc create namespace prod",
"oc label namespace/prod purpose=production",
"oc create namespace dev",
"oc label namespace/dev purpose=testing",
"oc run test-USDRANDOM --namespace=dev --rm -i -t --image=alpine -- sh",
"wget -qO- --timeout=2 http://web.default",
"wget: download timed out",
"oc run test-USDRANDOM --namespace=prod --rm -i -t --image=alpine -- sh",
"wget -qO- --timeout=2 http://web.default",
"<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>",
"oc delete pod <name> -n <namespace>",
"oc edit networks.operator.openshift.io cluster",
"oc get network-attachment-definitions <network-name> -o yaml",
"oc get network-attachment-definitions net1 -o go-template='{{printf \"%s\\n\" .spec.config}}' { \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens5\", \"mode\": \"bridge\", \"ipam\": {\"type\":\"static\",\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.128.2.1\"}],\"addresses\":[{\"address\":\"10.128.2.100/23\",\"gateway\":\"10.128.2.1\"}],\"dns\":{\"nameservers\":[\"172.30.0.10\"],\"domain\":\"us-west-2.compute.internal\",\"search\":[\"us-west-2.compute.internal\"]}} }",
"{ \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.7/24\" } ] } }",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"dhcp-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } } #",
"{ \"ipam\": { \"type\": \"dhcp\" } }",
"{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/27\", \"exclude\": [ \"192.0.2.192/30\", \"192.0.2.196/32\" ] } }",
"{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/29\", \"network_name\": \"example_net_common\", 1 } }",
"{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/24\", \"network_name\": \"example_net_common\", 1 } }",
"oc edit network.operator.openshift.io cluster",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: whereabouts-shim namespace: default rawCNIConfig: |- { \"name\": \"whereabouts-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\" } } type: Raw",
"oc get all -n openshift-multus | grep whereabouts-reconciler",
"pod/whereabouts-reconciler-jnp6g 1/1 Running 0 6s pod/whereabouts-reconciler-k76gg 1/1 Running 0 6s pod/whereabouts-reconciler-k86t9 1/1 Running 0 6s pod/whereabouts-reconciler-p4sxw 1/1 Running 0 6s pod/whereabouts-reconciler-rvfdv 1/1 Running 0 6s pod/whereabouts-reconciler-svzw9 1/1 Running 0 6s daemonset.apps/whereabouts-reconciler 6 6 6 6 6 kubernetes.io/os=linux 6s",
"oc create configmap whereabouts-config -n openshift-multus --from-literal=reconciler_cron_expression=\"*/15 * * * *\"",
"oc get all -n openshift-multus | grep whereabouts-reconciler",
"pod/whereabouts-reconciler-2p7hw 1/1 Running 0 4m14s pod/whereabouts-reconciler-76jk7 1/1 Running 0 4m14s pod/whereabouts-reconciler-94zw6 1/1 Running 0 4m14s pod/whereabouts-reconciler-mfh68 1/1 Running 0 4m14s pod/whereabouts-reconciler-pgshz 1/1 Running 0 4m14s pod/whereabouts-reconciler-xn5xz 1/1 Running 0 4m14s daemonset.apps/whereabouts-reconciler 6 6 6 6 6 kubernetes.io/os=linux 4m16s",
"oc -n openshift-multus logs whereabouts-reconciler-2p7hw",
"2024-02-02T16:33:54Z [debug] event not relevant: \"/cron-schedule/..2024_02_02_16_33_54.1375928161\": CREATE 2024-02-02T16:33:54Z [debug] event not relevant: \"/cron-schedule/..2024_02_02_16_33_54.1375928161\": CHMOD 2024-02-02T16:33:54Z [debug] event not relevant: \"/cron-schedule/..data_tmp\": RENAME 2024-02-02T16:33:54Z [verbose] using expression: */15 * * * * 2024-02-02T16:33:54Z [verbose] configuration updated to file \"/cron-schedule/..data\". New cron expression: */15 * * * * 2024-02-02T16:33:54Z [verbose] successfully updated CRON configuration id \"00c2d1c9-631d-403f-bb86-73ad104a6817\" - new cron expression: */15 * * * * 2024-02-02T16:33:54Z [debug] event not relevant: \"/cron-schedule/config\": CREATE 2024-02-02T16:33:54Z [debug] event not relevant: \"/cron-schedule/..2024_02_02_16_26_17.3874177937\": REMOVE 2024-02-02T16:45:00Z [verbose] starting reconciler run 2024-02-02T16:45:00Z [debug] NewReconcileLooper - inferred connection data 2024-02-02T16:45:00Z [debug] listing IP pools 2024-02-02T16:45:00Z [debug] no IP addresses to cleanup 2024-02-02T16:45:00Z [verbose] reconciler success",
"cniVersion: operator.openshift.io/v1 kind: Network =metadata: name: cluster spec: additionalNetworks: - name: whereabouts-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"whereabouts-dual-stack\", \"cniVersion\": \"0.3.1, \"type\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"ipRanges\": [ {\"range\": \"192.168.10.0/24\"}, {\"range\": \"2001:db8::/64\"} ] } }",
"oc exec -it mypod -- ip a",
"oc new-project test-namespace",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriovnic namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: false needVhostNet: true nicSelector: vendor: \"15b3\" 1 deviceID: \"101b\" 2 rootDevices: [\"00:05.0\"] numVfs: 10 priority: 99 resourceName: sriovnic nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\"",
"oc apply -f sriov-node-network-policy.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-network namespace: openshift-sriov-network-operator spec: networkNamespace: test-namespace resourceName: sriovnic spoofChk: \"off\" trust: \"on\"",
"oc apply -f sriov-network-attachment.yaml",
"apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: vlan-100 namespace: test-namespace spec: config: | { \"cniVersion\": \"0.4.0\", \"name\": \"vlan-100\", \"plugins\": [ { \"type\": \"vlan\", \"master\": \"ext0\", 1 \"mtu\": 1500, \"vlanId\": 100, \"linkInContainer\": true, 2 \"ipam\": {\"type\": \"whereabouts\", \"ipRanges\": [{\"range\": \"1.1.1.0/24\"}]} } ] }",
"oc apply -f vlan100-additional-network-configuration.yaml",
"apiVersion: v1 kind: Namespace metadata: name: test-namespace labels: pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/warn: privileged security.openshift.io/scc.podSecurityLabelSync: \"false\" --- apiVersion: v1 kind: Pod metadata: name: nginx-pod namespace: test-namespace annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"sriov-network\", \"namespace\": \"test-namespace\", \"interface\": \"ext0\" 1 }, { \"name\": \"vlan-100\", \"namespace\": \"test-namespace\", \"interface\": \"ext0.100\" } ]' spec: securityContext: runAsNonRoot: true containers: - name: nginx-container image: nginxinc/nginx-unprivileged:latest securityContext: allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] ports: - containerPort: 80 seccompProfile: type: \"RuntimeDefault\"",
"oc apply -f pod-a.yaml",
"oc describe pods nginx-pod -n test-namespace",
"Name: nginx-pod Namespace: test-namespace Priority: 0 Node: worker-1/10.46.186.105 Start Time: Mon, 14 Aug 2023 16:23:13 -0400 Labels: <none> Annotations: k8s.ovn.org/pod-networks: {\"default\":{\"ip_addresses\":[\"10.131.0.26/23\"],\"mac_address\":\"0a:58:0a:83:00:1a\",\"gateway_ips\":[\"10.131.0.1\"],\"routes\":[{\"dest\":\"10.128.0.0 k8s.v1.cni.cncf.io/network-status: [{ \"name\": \"ovn-kubernetes\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.26\" ], \"mac\": \"0a:58:0a:83:00:1a\", \"default\": true, \"dns\": {} },{ \"name\": \"test-namespace/sriov-network\", \"interface\": \"ext0\", \"mac\": \"6e:a7:5e:3f:49:1b\", \"dns\": {}, \"device-info\": { \"type\": \"pci\", \"version\": \"1.0.0\", \"pci\": { \"pci-address\": \"0000:d8:00.2\" } } },{ \"name\": \"test-namespace/vlan-100\", \"interface\": \"ext0.100\", \"ips\": [ \"1.1.1.1\" ], \"mac\": \"6e:a7:5e:3f:49:1b\", \"dns\": {} }] k8s.v1.cni.cncf.io/networks: [ { \"name\": \"sriov-network\", \"namespace\": \"test-namespace\", \"interface\": \"ext0\" }, { \"name\": \"vlan-100\", \"namespace\": \"test-namespace\", \"i openshift.io/scc: privileged Status: Running IP: 10.131.0.26 IPs: IP: 10.131.0.26",
"oc new-project test-namespace",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: bridge-network spec: config: '{ \"cniVersion\": \"0.4.0\", \"name\": \"bridge-network\", \"type\": \"bridge\", \"bridge\": \"br-001\", \"isGateway\": true, \"ipMasq\": true, \"hairpinMode\": true, \"ipam\": { \"type\": \"host-local\", \"subnet\": \"10.0.0.0/24\", \"routes\": [{\"dst\": \"0.0.0.0/0\"}] } }'",
"oc apply -f bridge-nad.yaml",
"oc get network-attachment-definitions",
"NAME AGE bridge-network 15s",
"apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: ipvlan-net namespace: test-namespace spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"ipvlan-net\", \"type\": \"ipvlan\", \"master\": \"ext0\", 1 \"mode\": \"l3\", \"linkInContainer\": true, 2 \"ipam\": {\"type\": \"whereabouts\", \"ipRanges\": [{\"range\": \"10.0.0.0/24\"}]} }'",
"oc apply -f ipvlan-additional-network-configuration.yaml",
"oc get network-attachment-definitions",
"NAME AGE bridge-network 87s ipvlan-net 9s",
"apiVersion: v1 kind: Pod metadata: name: pod-a namespace: test-namespace annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"bridge-network\", \"interface\": \"ext0\" 1 }, { \"name\": \"ipvlan-net\", \"interface\": \"ext1\" } ]' spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-pod image: quay.io/openshifttest/hello-sdn@sha256:c89445416459e7adea9a5a416b3365ed3d74f2491beb904d61dc8d1eb89a72a4 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]",
"oc apply -f pod-a.yaml",
"oc get pod -n test-namespace",
"NAME READY STATUS RESTARTS AGE pod-a 1/1 Running 0 2m36s",
"oc exec -n test-namespace pod-a -- ip a",
"1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 3: eth0@if105: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default link/ether 0a:58:0a:d9:00:5d brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 10.217.0.93/23 brd 10.217.1.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::488b:91ff:fe84:a94b/64 scope link valid_lft forever preferred_lft forever 4: ext0@if107: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether be:da:bd:7e:f4:37 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 10.0.0.2/24 brd 10.0.0.255 scope global ext0 valid_lft forever preferred_lft forever inet6 fe80::bcda:bdff:fe7e:f437/64 scope link valid_lft forever preferred_lft forever 5: ext1@ext0: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default link/ether be:da:bd:7e:f4:37 brd ff:ff:ff:ff:ff:ff inet 10.0.0.1/24 brd 10.0.0.255 scope global ext1 valid_lft forever preferred_lft forever inet6 fe80::beda:bd00:17e:f437/64 scope link valid_lft forever preferred_lft forever",
"oc edit networks.operator.openshift.io cluster",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: [] 1",
"oc get network-attachment-definition --all-namespaces",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: test-network-1 namespace: additional-network-1 type: Raw rawCNIConfig: '{ \"cniVersion\": \"0.3.1\", \"name\": \"macvlan-vrf\", \"plugins\": [ 1 { \"type\": \"macvlan\", \"master\": \"eth1\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.23/24\" } ] } }, { \"type\": \"vrf\", 2 \"vrfname\": \"vrf-1\", 3 \"table\": 1001 4 }] }'",
"oc create -f additional-network-attachment.yaml",
"oc get network-attachment-definitions -n <namespace>",
"NAME AGE additional-network-1 14m",
"apiVersion: v1 kind: Pod metadata: name: pod-additional-net annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"test-network-1\" 1 } ]' spec: containers: - name: example-pod-1 command: [\"/bin/bash\", \"-c\", \"sleep 9000000\"] image: centos:8",
"oc create -f pod-additional-net.yaml",
"pod/test-pod created",
"ip vrf show",
"Name Table ----------------------- vrf-1 1001",
"ip link",
"5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode",
"oc label node <node_name> feature.node.kubernetes.io/network-sriov.capable=\"true\"",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" 4 priority: <priority> 5 mtu: <mtu> 6 needVhostNet: false 7 numVfs: <num> 8 externallyManaged: false 9 nicSelector: 10 vendor: \"<vendor_code>\" 11 deviceID: \"<device_id>\" 12 pfNames: [\"<pf_name>\", ...] 13 rootDevices: [\"<pci_bus_id>\", ...] 14 netFilter: \"<filter_string>\" 15 deviceType: <device_type> 16 isRdma: false 17 linkType: <link_type> 18 eSwitchMode: \"switchdev\" 19 excludeTopology: false 20",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> namespace: openshift-sriov-network-operator spec: resourceName: <sriov_resource_name> nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: <num> nicSelector: vendor: \"<vendor_code>\" deviceID: \"<device_id>\" rootDevices: - \"<pci_bus_id>\" linkType: <link_type> isRdma: true",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> namespace: openshift-sriov-network-operator spec: resourceName: <sriov_resource_name> nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 1 1 nicSelector: vendor: \"<vendor_code>\" deviceID: \"<device_id>\" netFilter: \"openstack/NetworkID:ea24bd04-8674-4f69-b0ee-fa0b3bd20509\" 2",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodeState metadata: name: node-25 1 namespace: openshift-sriov-network-operator ownerReferences: - apiVersion: sriovnetwork.openshift.io/v1 blockOwnerDeletion: true controller: true kind: SriovNetworkNodePolicy name: default spec: dpConfigVersion: \"39824\" status: interfaces: 2 - deviceID: \"1017\" driver: mlx5_core mtu: 1500 name: ens785f0 pciAddress: \"0000:18:00.0\" totalvfs: 8 vendor: 15b3 - deviceID: \"1017\" driver: mlx5_core mtu: 1500 name: ens785f1 pciAddress: \"0000:18:00.1\" totalvfs: 8 vendor: 15b3 - deviceID: 158b driver: i40e mtu: 1500 name: ens817f0 pciAddress: 0000:81:00.0 totalvfs: 64 vendor: \"8086\" - deviceID: 158b driver: i40e mtu: 1500 name: ens817f1 pciAddress: 0000:81:00.1 totalvfs: 64 vendor: \"8086\" - deviceID: 158b driver: i40e mtu: 1500 name: ens803f0 pciAddress: 0000:86:00.0 totalvfs: 64 vendor: \"8086\" syncStatus: Succeeded",
"mstconfig -d -0001:b1:00.1 set SRIOV_EN=1 NUM_OF_VFS=16 1 2",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: {} configurationMode: daemon disableDrain: false disablePlugins: - mellanox enableInjector: true enableOperatorWebhook: true logLevel: 2",
"oc -n openshift-sriov-network-operator get sriovnetworknodestate.sriovnetwork.openshift.io worker-0 -oyaml",
"- deviceID: 101d driver: mlx5_core eSwitchMode: legacy linkSpeed: -1 Mb/s linkType: ETH mac: 08:c0:eb:96:31:25 mtu: 1500 name: ens3f1np1 pciAddress: 0000:b1:00.1 1 totalvfs: 16 vendor: 15b3",
"Secure Boot: Enabled Secure Boot Policy: Standard Secure Boot Mode: Mode Deployed",
"pfNames: [\"netpf0#2-7\"]",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-net-1 namespace: openshift-sriov-network-operator spec: resourceName: net1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 16 nicSelector: pfNames: [\"netpf0#0-0\"] deviceType: netdevice",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-net-1-dpdk namespace: openshift-sriov-network-operator spec: resourceName: net1dpdk nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 16 nicSelector: pfNames: [\"netpf0#8-15\"] deviceType: vfio-pci",
"ip link show <interface> 1",
"5: ens3f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether 3c:fd:fe:d1:bc:01 brd ff:ff:ff:ff:ff:ff vf 0 link/ether 5a:e7:88:25:ea:a0 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 1 link/ether 3e:1d:36:d7:3d:49 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 2 link/ether ce:09:56:97:df:f9 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 3 link/ether 5e:91:cf:88:d1:38 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 4 link/ether e6:06:a1:96:2f:de brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off",
"apiVersion: v1 kind: Pod metadata: name: testpmd-sriov namespace: mynamespace annotations: cpu-load-balancing.crio.io: \"disable\" cpu-quota.crio.io: \"disable\" spec: containers: - name: testpmd command: [\"sleep\", \"99999\"] image: registry.redhat.io/openshift4/dpdk-base-rhel8:v4.9 securityContext: capabilities: add: [\"IPC_LOCK\",\"SYS_ADMIN\"] privileged: true runAsUser: 0 resources: requests: memory: 1000Mi hugepages-1Gi: 1Gi cpu: '2' openshift.io/sriov1: 1 limits: hugepages-1Gi: 1Gi cpu: '2' memory: 1000Mi openshift.io/sriov1: 1 volumeMounts: - mountPath: /dev/hugepages name: hugepage readOnly: False runtimeClassName: performance-cnf-performanceprofile 1 volumes: - name: hugepage emptyDir: medium: HugePages",
"apiVersion: v1 kind: Pod metadata: name: testpmd-sriov namespace: mynamespace annotations: k8s.v1.cni.cncf.io/networks: hwoffload1 spec: runtimeClassName: performance-cnf-performanceprofile 1 containers: - name: testpmd command: [\"sleep\", \"99999\"] image: registry.redhat.io/openshift4/dpdk-base-rhel8:v4.9 securityContext: capabilities: add: [\"IPC_LOCK\",\"SYS_ADMIN\"] privileged: true runAsUser: 0 resources: requests: memory: 1000Mi hugepages-1Gi: 1Gi cpu: '2' limits: hugepages-1Gi: 1Gi cpu: '2' memory: 1000Mi volumeMounts: - mountPath: /mnt/huge name: hugepage readOnly: False volumes: - name: hugepage emptyDir: medium: HugePages",
"oc create -f <name>-sriov-node-network.yaml",
"oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'",
"apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: <name> 1 spec: containers: - name: sample-container image: <image> 2 command: [\"sleep\", \"infinity\"] resources: limits: memory: \"1Gi\" 3 cpu: \"2\" 4 requests: memory: \"1Gi\" cpu: \"2\"",
"oc create -f <filename> 1",
"oc describe pod sample-pod",
"oc exec sample-pod -- cat /sys/fs/cgroup/cpuset/cpuset.cpus",
"oc exec sample-pod -- cat /sys/fs/cgroup/cpuset/cpuset.cpus",
"oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name>",
"\"lastSyncError\": \"write /sys/bus/pci/devices/0000:3b:00.1/sriov_numvfs: cannot allocate memory\"",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 vlan: <vlan> 5 spoofChk: \"<spoof_check>\" 6 ipam: |- 7 {} linkState: <link_state> 8 maxTxRate: <max_tx_rate> 9 minTxRate: <min_tx_rate> 10 vlanQoS: <vlan_qos> 11 trust: \"<trust_vf>\" 12 capabilities: <capabilities> 13",
"cniVersion: operator.openshift.io/v1 kind: Network =metadata: name: cluster spec: additionalNetworks: - name: whereabouts-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"whereabouts-dual-stack\", \"cniVersion\": \"0.3.1, \"type\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"ipRanges\": [ {\"range\": \"192.168.10.0/24\"}, {\"range\": \"2001:db8::/64\"} ] } }",
"oc exec -it mypod -- ip a",
"{ \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.7/24\" } ] } }",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"dhcp-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } } #",
"{ \"ipam\": { \"type\": \"dhcp\" } }",
"{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/27\", \"exclude\": [ \"192.0.2.192/30\", \"192.0.2.196/32\" ] } }",
"{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/29\", \"network_name\": \"example_net_common\", 1 } }",
"{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/24\", \"network_name\": \"example_net_common\", 1 } }",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: attach1 namespace: openshift-sriov-network-operator spec: resourceName: net1 networkNamespace: project2 ipam: |- { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"gateway\": \"10.56.217.1\" }",
"oc create -f <name>.yaml",
"oc get net-attach-def -n <namespace>",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: example-network namespace: additional-sriov-network-1 spec: ipam: | { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"routes\": [{ \"dst\": \"0.0.0.0/0\" }], \"gateway\": \"10.56.217.1\" } vlan: 0 resourceName: intelnics metaPlugins : | { \"type\": \"vrf\", 1 \"vrfname\": \"example-vrf-name\" 2 }",
"oc create -f sriov-network-attachment.yaml",
"oc get network-attachment-definitions -n <namespace> 1",
"NAME AGE additional-sriov-network-1 14m",
"ip vrf show",
"Name Table ----------------------- red 10",
"ip link",
"5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode",
"[ { \"name\": \"<name>\", 1 \"mac\": \"<mac_address>\", 2 \"ips\": [\"<cidr_range>\"] 3 } ]",
"apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"net1\", \"mac\": \"20:04:0f:f1:88:01\", \"ips\": [\"192.168.10.1/24\", \"2001::1/64\"] } ] spec: containers: - name: sample-container image: <image> imagePullPolicy: IfNotPresent command: [\"sleep\", \"infinity\"]",
"metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 1",
"metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"<network>\", 1 \"namespace\": \"<namespace>\", 2 \"default-route\": [\"<default-route>\"] 3 } ]",
"oc create -f <name>.yaml",
"oc get pod <name> -o yaml",
"oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/network-status: |- 1 [{ \"name\": \"ovn-kubernetes\", \"interface\": \"eth0\", \"ips\": [ \"10.128.2.14\" ], \"default\": true, \"dns\": {} },{ \"name\": \"macvlan-bridge\", \"interface\": \"net1\", \"ips\": [ \"20.2.2.100\" ], \"mac\": \"22:2f:60:a5:f8:00\", \"dns\": {} }] name: example-pod namespace: default spec: status:",
"oc describe pod example-pod",
"\"mac\": \"20:04:0f:f1:88:01\", \"mtu\": 1500, \"dns\": {}, \"device-info\": { \"type\": \"pci\", \"version\": \"1.1.0\", \"pci\": { \"pci-address\": \"0000:86:01.3\" } }",
"oc exec example-pod -n sriov-tests -- cat /etc/podnetinfo/annotations | grep mtu",
"k8s.v1.cni.cncf.io/network-status=\"[{ \\\"name\\\": \\\"ovn-kubernetes\\\", \\\"interface\\\": \\\"eth0\\\", \\\"ips\\\": [ \\\"10.131.0.67\\\" ], \\\"mac\\\": \\\"0a:58:0a:83:00:43\\\", \\\"default\\\": true, \\\"dns\\\": {} },{ \\\"name\\\": \\\"sriov-tests/sriov-nic-1\\\", \\\"interface\\\": \\\"net1\\\", \\\"ips\\\": [ \\\"192.168.10.1\\\" ], \\\"mac\\\": \\\"20:04:0f:f1:88:01\\\", \\\"mtu\\\": 1500, \\\"dns\\\": {}, \\\"device-info\\\": { \\\"type\\\": \\\"pci\\\", \\\"version\\\": \\\"1.1.0\\\", \\\"pci\\\": { \\\"pci-address\\\": \\\"0000:86:01.3\\\" } } }]\"",
"apiVersion: v1 kind: SriovNetworkPoolConfig metadata: name: pool-1 1 namespace: openshift-sriov-network-operator 2 spec: maxUnavailable: 2 3 nodeSelector: 4 matchLabels: node-role.kubernetes.io/worker: \"\"",
"oc create -f sriov-nw-pool.yaml",
"oc create namespace sriov-test",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-nic-1 namespace: openshift-sriov-network-operator spec: deviceType: netdevice nicSelector: pfNames: [\"ens1\"] nodeSelector: node-role.kubernetes.io/worker: \"\" numVfs: 5 priority: 99 resourceName: sriov_nic_1",
"oc create -f sriov-node-policy.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-nic-1 namespace: openshift-sriov-network-operator spec: linkState: auto networkNamespace: sriov-test resourceName: sriov_nic_1 capabilities: '{ \"mac\": true, \"ips\": true }' ipam: '{ \"type\": \"static\" }'",
"oc create -f sriov-network.yaml",
"oc get sriovNetworkpoolConfig -n openshift-sriov-network-operator",
"NAME AGE pool-1 67s 1",
"oc patch SriovNetworkNodePolicy sriov-nic-1 -n openshift-sriov-network-operator --type merge -p '{\"spec\": {\"numVfs\": 4}}'",
"oc get sriovNetworkNodeState -n openshift-sriov-network-operator",
"NAMESPACE NAME SYNC STATUS DESIRED SYNC STATE CURRENT SYNC STATE AGE openshift-sriov-network-operator worker-0 InProgress Drain_Required DrainComplete 3d10h openshift-sriov-network-operator worker-1 InProgress Drain_Required DrainComplete 3d10h",
"NAMESPACE NAME SYNC STATUS DESIRED SYNC STATE CURRENT SYNC STATE AGE openshift-sriov-network-operator worker-0 Succeeded Idle Idle 3d10h openshift-sriov-network-operator worker-1 Succeeded Idle Idle 3d10h",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <policy_name> namespace: openshift-sriov-network-operator spec: resourceName: sriovnuma0 1 nodeSelector: kubernetes.io/hostname: <node_name> numVfs: <number_of_Vfs> nicSelector: 2 vendor: \"<vendor_ID>\" deviceID: \"<device_ID>\" deviceType: netdevice excludeTopology: true 3",
"oc create -f sriov-network-node-policy.yaml",
"sriovnetworknodepolicy.sriovnetwork.openshift.io/policy-for-numa-0 created",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-numa-0-network 1 namespace: openshift-sriov-network-operator spec: resourceName: sriovnuma0 2 networkNamespace: <namespace> 3 ipam: |- 4 { \"type\": \"<ipam_type>\", }",
"oc create -f sriov-network.yaml",
"sriovnetwork.sriovnetwork.openshift.io/sriov-numa-0-network created",
"apiVersion: v1 kind: Pod metadata: name: <pod_name> annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"sriov-numa-0-network\", 1 } ] spec: containers: - name: <container_name> image: <image> imagePullPolicy: IfNotPresent command: [\"sleep\", \"infinity\"]",
"oc create -f sriov-network-pod.yaml",
"pod/example-pod created",
"oc get pod <pod_name>",
"NAME READY STATUS RESTARTS AGE test-deployment-sriov-76cbbf4756-k9v72 1/1 Running 0 45h",
"oc debug pod/<pod_name>",
"chroot /host",
"lscpu | grep NUMA",
"NUMA node(s): 2 NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18, NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,",
"cat /proc/self/status | grep Cpus",
"Cpus_allowed: aa Cpus_allowed_list: 1,3,5,7",
"cat /sys/class/net/net1/device/numa_node",
"0",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovIBNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 ipam: |- 5 {} linkState: <link_state> 6 capabilities: <capabilities> 7",
"cniVersion: operator.openshift.io/v1 kind: Network =metadata: name: cluster spec: additionalNetworks: - name: whereabouts-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"whereabouts-dual-stack\", \"cniVersion\": \"0.3.1, \"type\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"ipRanges\": [ {\"range\": \"192.168.10.0/24\"}, {\"range\": \"2001:db8::/64\"} ] } }",
"oc exec -it mypod -- ip a",
"{ \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.7/24\" } ] } }",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"dhcp-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } } #",
"{ \"ipam\": { \"type\": \"dhcp\" } }",
"{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/27\", \"exclude\": [ \"192.0.2.192/30\", \"192.0.2.196/32\" ] } }",
"{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/29\", \"network_name\": \"example_net_common\", 1 } }",
"{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/24\", \"network_name\": \"example_net_common\", 1 } }",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovIBNetwork metadata: name: attach1 namespace: openshift-sriov-network-operator spec: resourceName: net1 networkNamespace: project2 ipam: |- { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"gateway\": \"10.56.217.1\" }",
"oc create -f <name>.yaml",
"oc get net-attach-def -n <namespace>",
"[ { \"name\": \"<network_attachment>\", 1 \"infiniband-guid\": \"<guid>\", 2 \"ips\": [\"<cidr_range>\"] 3 } ]",
"apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"ib1\", \"infiniband-guid\": \"c2:11:22:33:44:55:66:77\", \"ips\": [\"192.168.10.1/24\", \"2001::1/64\"] } ] spec: containers: - name: sample-container image: <image> imagePullPolicy: IfNotPresent command: [\"sleep\", \"infinity\"]",
"metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 1",
"metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"<network>\", 1 \"namespace\": \"<namespace>\", 2 \"default-route\": [\"<default-route>\"] 3 } ]",
"oc create -f <name>.yaml",
"oc get pod <name> -o yaml",
"oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/network-status: |- 1 [{ \"name\": \"ovn-kubernetes\", \"interface\": \"eth0\", \"ips\": [ \"10.128.2.14\" ], \"default\": true, \"dns\": {} },{ \"name\": \"macvlan-bridge\", \"interface\": \"net1\", \"ips\": [ \"20.2.2.100\" ], \"mac\": \"22:2f:60:a5:f8:00\", \"dns\": {} }] name: example-pod namespace: default spec: status:",
"oc describe pod example-pod",
"\"mac\": \"20:04:0f:f1:88:01\", \"mtu\": 1500, \"dns\": {}, \"device-info\": { \"type\": \"pci\", \"version\": \"1.1.0\", \"pci\": { \"pci-address\": \"0000:86:01.3\" } }",
"oc exec example-pod -n sriov-tests -- cat /etc/podnetinfo/annotations | grep mtu",
"k8s.v1.cni.cncf.io/network-status=\"[{ \\\"name\\\": \\\"ovn-kubernetes\\\", \\\"interface\\\": \\\"eth0\\\", \\\"ips\\\": [ \\\"10.131.0.67\\\" ], \\\"mac\\\": \\\"0a:58:0a:83:00:43\\\", \\\"default\\\": true, \\\"dns\\\": {} },{ \\\"name\\\": \\\"sriov-tests/sriov-nic-1\\\", \\\"interface\\\": \\\"net1\\\", \\\"ips\\\": [ \\\"192.168.10.1\\\" ], \\\"mac\\\": \\\"20:04:0f:f1:88:01\\\", \\\"mtu\\\": 1500, \\\"dns\\\": {}, \\\"device-info\\\": { \\\"type\\\": \\\"pci\\\", \\\"version\\\": \\\"1.1.0\\\", \\\"pci\\\": { \\\"pci-address\\\": \\\"0000:86:01.3\\\" } } }]\"",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkPoolConfig metadata: name: worker namespace: openshift-sriov-network-operator spec: maxUnavailable: 1 nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\" rdmaMode: exclusive 1",
"oc create -f sriov-nw-pool.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-nic-pf1 namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: true 1 nicSelector: pfNames: [\"ens3f0np0\"] nodeSelector: node-role.kubernetes.io/worker: \"\" numVfs: 4 priority: 99 resourceName: sriov_nic_pf1",
"oc create -f sriov-node-policy.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-nic-pf1 namespace: openshift-sriov-network-operator spec: networkNamespace: sriov-tests resourceName: sriov_nic_pf1 ipam: |- metaPlugins: | { \"type\": \"rdma\" 1 }",
"oc create -f sriov-network.yaml",
"apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"net1\", \"mac\": \"20:04:0f:f1:88:01\", \"ips\": [\"192.168.10.1/24\", \"2001::1/64\"] } ] spec: containers: - name: sample-container image: <image> imagePullPolicy: IfNotPresent command: [\"sleep\", \"infinity\"]",
"oc create -f sriov-test-pod.yaml",
"oc rsh testpod1 -n sriov-tests",
"ls /sys/bus/pci/devices/USD{PCIDEVICE_OPENSHIFT_IO_SRIOV_NIC_PF1}/infiniband/*/ports/1/hw_counters/",
"duplicate_request out_of_buffer req_cqe_flush_error resp_cqe_flush_error roce_adp_retrans roce_slow_restart_trans implied_nak_seq_err out_of_sequence req_remote_access_errors resp_local_length_error roce_adp_retrans_to rx_atomic_requests lifespan packet_seq_err req_remote_invalid_request resp_remote_access_errors roce_slow_restart rx_read_requests local_ack_timeout_err req_cqe_error resp_cqe_error rnr_nak_retry_err roce_slow_restart_cnps rx_write_requests",
"oc label node <node_name> feature.node.kubernetes.io/network-sriov.capable=\"true\"",
"oc create namespace sysctl-tuning-test",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policyoneflag 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: policyoneflag 3 nodeSelector: 4 feature.node.kubernetes.io/network-sriov.capable=\"true\" priority: 10 5 numVfs: 5 6 nicSelector: 7 pfNames: [\"ens5\"] 8 deviceType: \"netdevice\" 9 isRdma: false 10",
"oc create -f policyoneflag-sriov-node-network.yaml",
"oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'",
"Succeeded",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: onevalidflag 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: policyoneflag 3 networkNamespace: sysctl-tuning-test 4 ipam: '{ \"type\": \"static\" }' 5 capabilities: '{ \"mac\": true, \"ips\": true }' 6 metaPlugins : | 7 { \"type\": \"tuning\", \"capabilities\":{ \"mac\":true }, \"sysctl\":{ \"net.ipv4.conf.IFNAME.accept_redirects\": \"1\" } }",
"oc create -f sriov-network-interface-sysctl.yaml",
"oc get network-attachment-definitions -n <namespace> 1",
"NAME AGE onevalidflag 14m",
"apiVersion: v1 kind: Pod metadata: name: tunepod namespace: sysctl-tuning-test annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"onevalidflag\", 1 \"mac\": \"0a:56:0a:83:04:0c\", 2 \"ips\": [\"10.100.100.200/24\"] 3 } ] spec: containers: - name: podexample image: centos command: [\"/bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault",
"oc apply -f examplepod.yaml",
"oc get pod -n sysctl-tuning-test",
"NAME READY STATUS RESTARTS AGE tunepod 1/1 Running 0 47s",
"oc rsh -n sysctl-tuning-test tunepod",
"sysctl net.ipv4.conf.net1.accept_redirects",
"net.ipv4.conf.net1.accept_redirects = 1",
"oc create namespace sysctl-tuning-test",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policyallflags 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: policyallflags 3 nodeSelector: 4 node.alpha.kubernetes-incubator.io/nfd-network-sriov.capable = `true` priority: 10 5 numVfs: 5 6 nicSelector: 7 pfNames: [\"ens1f0\"] 8 deviceType: \"netdevice\" 9 isRdma: false 10",
"oc create -f policyallflags-sriov-node-network.yaml",
"oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'",
"Succeeded",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: allvalidflags 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: policyallflags 3 networkNamespace: sysctl-tuning-test 4 capabilities: '{ \"mac\": true, \"ips\": true }' 5",
"oc create -f sriov-network-attachment.yaml",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: bond-sysctl-network namespace: sysctl-tuning-test spec: config: '{ \"cniVersion\":\"0.4.0\", \"name\":\"bound-net\", \"plugins\":[ { \"type\":\"bond\", 1 \"mode\": \"active-backup\", 2 \"failOverMac\": 1, 3 \"linksInContainer\": true, 4 \"miimon\": \"100\", \"links\": [ 5 {\"name\": \"net1\"}, {\"name\": \"net2\"} ], \"ipam\":{ 6 \"type\":\"static\" } }, { \"type\":\"tuning\", 7 \"capabilities\":{ \"mac\":true }, \"sysctl\":{ \"net.ipv4.conf.IFNAME.accept_redirects\": \"0\", \"net.ipv4.conf.IFNAME.accept_source_route\": \"0\", \"net.ipv4.conf.IFNAME.disable_policy\": \"1\", \"net.ipv4.conf.IFNAME.secure_redirects\": \"0\", \"net.ipv4.conf.IFNAME.send_redirects\": \"0\", \"net.ipv6.conf.IFNAME.accept_redirects\": \"0\", \"net.ipv6.conf.IFNAME.accept_source_route\": \"1\", \"net.ipv6.neigh.IFNAME.base_reachable_time_ms\": \"20000\", \"net.ipv6.neigh.IFNAME.retrans_time_ms\": \"2000\" } } ] }'",
"oc create -f sriov-bond-network-interface.yaml",
"oc get network-attachment-definitions -n <namespace> 1",
"NAME AGE bond-sysctl-network 22m allvalidflags 47m",
"apiVersion: v1 kind: Pod metadata: name: tunepod namespace: sysctl-tuning-test annotations: k8s.v1.cni.cncf.io/networks: |- [ {\"name\": \"allvalidflags\"}, 1 {\"name\": \"allvalidflags\"}, { \"name\": \"bond-sysctl-network\", \"interface\": \"bond0\", \"mac\": \"0a:56:0a:83:04:0c\", 2 \"ips\": [\"10.100.100.200/24\"] 3 } ] spec: containers: - name: podexample image: centos command: [\"/bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault",
"oc apply -f examplepod.yaml",
"oc get pod -n sysctl-tuning-test",
"NAME READY STATUS RESTARTS AGE tunepod 1/1 Running 0 47s",
"oc rsh -n sysctl-tuning-test tunepod",
"sysctl net.ipv6.neigh.bond0.base_reachable_time_ms",
"net.ipv6.neigh.bond0.base_reachable_time_ms = 20000",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriovnetpolicy-mlx namespace: openshift-sriov-network-operator spec: deviceType: netdevice nicSelector: deviceID: \"1017\" pfNames: - ens8f0np0#0-9 rootDevices: - 0000:d8:00.0 vendor: \"15b3\" nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 10 priority: 99 resourceName: resourcemlx",
"oc create -f sriovnetpolicy-mlx.yaml",
"oc create namespace enable-allmulti-test",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: enableallmulti 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: enableallmulti 3 networkNamespace: enable-allmulti-test 4 ipam: '{ \"type\": \"static\" }' 5 capabilities: '{ \"mac\": true, \"ips\": true }' 6 trust: \"on\" 7 metaPlugins : | 8 { \"type\": \"tuning\", \"capabilities\":{ \"mac\":true }, \"allmulti\": true } }",
"oc create -f sriov-enable-all-multicast.yaml",
"oc get network-attachment-definitions -n <namespace> 1",
"NAME AGE enableallmulti 14m",
"oc get sriovnetwork -n openshift-sriov-network-operator",
"apiVersion: v1 kind: Pod metadata: name: samplepod namespace: enable-allmulti-test annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"enableallmulti\", 1 \"mac\": \"0a:56:0a:83:04:0c\", 2 \"ips\": [\"10.100.100.200/24\"] 3 } ] spec: containers: - name: podexample image: centos command: [\"/bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault",
"oc apply -f examplepod.yaml",
"oc get pod -n enable-allmulti-test",
"NAME READY STATUS RESTARTS AGE samplepod 1/1 Running 0 47s",
"oc rsh -n enable-allmulti-test samplepod",
"sh-4.4# ip link",
"1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: eth0@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8901 qdisc noqueue state UP mode DEFAULT group default link/ether 0a:58:0a:83:00:10 brd ff:ff:ff:ff:ff:ff link-netnsid 0 1 3: net1@if24: <BROADCAST,MULTICAST,ALLMULTI,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default link/ether ee:9b:66:a4:ec:1d brd ff:ff:ff:ff:ff:ff link-netnsid 0 2",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriovnetpolicy-810 namespace: openshift-sriov-network-operator spec: deviceType: netdevice nicSelector: pfNames: - ens5f0#0-9 nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numVfs: 10 priority: 99 resourceName: resource810",
"oc create -f sriovnetpolicy-810-sriov-node-network.yaml",
"watch -n 1 'oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath=\"{.status.syncStatus}\"'",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriovnetwork-1ad-810 namespace: openshift-sriov-network-operator spec: ipam: '{}' vlan: 171 1 vlanProto: \"802.1ad\" 2 networkNamespace: default resourceName: resource810",
"oc create -f nad-sriovnetwork-1ad-810.yaml",
"apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: nad-cvlan100 namespace: default spec: config: '{ \"name\": \"vlan-100\", \"cniVersion\": \"0.3.1\", \"type\": \"vlan\", \"linkInContainer\": true, \"master\": \"net1\", 1 \"vlanId\": 100, \"ipam\": {\"type\": \"static\"} }'",
"oc apply -f nad-cvlan100.yaml",
"apiVersion: v1 kind: Pod metadata: name: test-pod annotations: k8s.v1.cni.cncf.io/networks: sriovnetwork-1ad-810, nad-cvlan100 spec: containers: - name: test-container image: quay.io/ocp-edge-qe/cnf-gotests-client:v4.10 imagePullPolicy: Always securityContext: privileged: true",
"oc create -f test-qinq-pod.yaml",
"oc debug node/my-cluster-node -- bash -c \"ip link show ens5f0\"",
"6: ens5f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether b4:96:91:a5:22:10 brd ff:ff:ff:ff:ff:ff vf 0 link/ether a2:81:ba:d0:6f:f3 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 1 link/ether 8a:bb:0a:36:f2:ed brd ff:ff:ff:ff:ff:ff, vlan 171, vlan protocol 802.1ad, spoof checking on, link-state auto, trust off vf 2 link/ether ca:0e:e1:5b:0c:d2 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 3 link/ether ee:6c:e2:f5:2c:70 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 4 link/ether 0a:d6:b7:66:5e:e8 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 5 link/ether da:d5:e7:14:4f:aa brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 6 link/ether d6:8e:85:75:12:5c brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 7 link/ether d6:eb:ce:9c:ea:78 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 8 link/ether 5e:c5:cc:05:93:3c brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust on vf 9 link/ether a6:5a:7c:1c:2a:16 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-example namespace: openshift-sriov-network-operator spec: resourceName: example nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 4 nicSelector: vendor: \"8086\" pfNames: ['ens803f0'] rootDevices: ['0000:86:00.0']",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: net-example namespace: openshift-sriov-network-operator spec: networkNamespace: default ipam: | 1 { \"type\": \"host-local\", 2 \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"routes\": [ {\"dst\": \"224.0.0.0/5\"}, {\"dst\": \"232.0.0.0/5\"} ], \"gateway\": \"10.56.217.1\" } resourceName: example",
"apiVersion: v1 kind: Pod metadata: name: testpmd namespace: default annotations: k8s.v1.cni.cncf.io/networks: nic1 spec: containers: - name: example image: rhel7:latest securityContext: capabilities: add: [\"NET_ADMIN\"] 1 command: [ \"sleep\", \"infinity\"]",
"apiVersion: v1 kind: Pod metadata: name: rdma-app annotations: k8s.v1.cni.cncf.io/networks: sriov-rdma-mlnx spec: containers: - name: testpmd image: <RDMA_image> imagePullPolicy: IfNotPresent securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] command: [\"sleep\", \"infinity\"]",
"apiVersion: v1 kind: Pod metadata: name: dpdk-app annotations: k8s.v1.cni.cncf.io/networks: sriov-dpdk-net spec: containers: - name: testpmd image: <DPDK_image> securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: memory: \"1Gi\" cpu: \"2\" hugepages-1Gi: \"4Gi\" requests: memory: \"1Gi\" cpu: \"2\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: intel-dpdk-node-policy namespace: openshift-sriov-network-operator spec: resourceName: intelnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" priority: <priority> numVfs: <num> nicSelector: vendor: \"8086\" deviceID: \"158b\" pfNames: [\"<pf_name>\", ...] rootDevices: [\"<pci_bus_id>\", \"...\"] deviceType: vfio-pci 1",
"oc create -f intel-dpdk-node-policy.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: intel-dpdk-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- ... 1 vlan: <vlan> resourceName: intelnics",
"oc create -f intel-dpdk-network.yaml",
"apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: intel-dpdk-network spec: containers: - name: testpmd image: <DPDK_image> 2 securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] 3 volumeMounts: - mountPath: /mnt/huge 4 name: hugepage resources: limits: openshift.io/intelnics: \"1\" 5 memory: \"1Gi\" cpu: \"4\" 6 hugepages-1Gi: \"4Gi\" 7 requests: openshift.io/intelnics: \"1\" memory: \"1Gi\" cpu: \"4\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages",
"oc create -f intel-dpdk-pod.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: mlx-dpdk-node-policy namespace: openshift-sriov-network-operator spec: resourceName: mlxnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" priority: <priority> numVfs: <num> nicSelector: vendor: \"15b3\" deviceID: \"1015\" 1 pfNames: [\"<pf_name>\", ...] rootDevices: [\"<pci_bus_id>\", \"...\"] deviceType: netdevice 2 isRdma: true 3",
"oc create -f mlx-dpdk-node-policy.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: mlx-dpdk-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- 1 vlan: <vlan> resourceName: mlxnics",
"oc create -f mlx-dpdk-network.yaml",
"apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: mlx-dpdk-network spec: containers: - name: testpmd image: <DPDK_image> 2 securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] 3 volumeMounts: - mountPath: /mnt/huge 4 name: hugepage resources: limits: openshift.io/mlxnics: \"1\" 5 memory: \"1Gi\" cpu: \"4\" 6 hugepages-1Gi: \"4Gi\" 7 requests: openshift.io/mlxnics: \"1\" memory: \"1Gi\" cpu: \"4\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages",
"oc create -f mlx-dpdk-pod.yaml",
"apiVersion: v1 kind: Namespace metadata: name: test-namespace labels: pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/warn: privileged security.openshift.io/scc.podSecurityLabelSync: \"false\"",
"oc apply -f test-namespace.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriovnic namespace: openshift-sriov-network-operator spec: deviceType: netdevice 1 isRdma: true 2 needVhostNet: true 3 nicSelector: vendor: \"15b3\" 4 deviceID: \"101b\" 5 rootDevices: [\"00:05.0\"] numVfs: 10 priority: 99 resourceName: sriovnic nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\"",
"oc create -f sriov-node-network-policy.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-network namespace: openshift-sriov-network-operator spec: networkNamespace: test-namespace resourceName: sriovnic spoofChk: \"off\" trust: \"on\"",
"oc create -f sriov-network-attachment.yaml",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: tap-one namespace: test-namespace 1 spec: config: '{ \"cniVersion\": \"0.4.0\", \"name\": \"tap\", \"plugins\": [ { \"type\": \"tap\", \"multiQueue\": true, \"selinuxcontext\": \"system_u:system_r:container_t:s0\" }, { \"type\":\"tuning\", \"capabilities\":{ \"mac\":true } } ] }'",
"oc apply -f tap-example.yaml",
"apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: test-namespace 1 annotations: k8s.v1.cni.cncf.io/networks: '[ {\"name\": \"sriov-network\", \"namespace\": \"test-namespace\"}, {\"name\": \"tap-one\", \"interface\": \"ext0\", \"namespace\": \"test-namespace\"}]' spec: nodeSelector: kubernetes.io/hostname: \"worker-0\" securityContext: fsGroup: 1001 2 runAsGroup: 1001 3 seccompProfile: type: RuntimeDefault containers: - name: testpmd image: <DPDK_image> 4 securityContext: capabilities: drop: [\"ALL\"] 5 add: 6 - IPC_LOCK - NET_RAW #for mlx only 7 runAsUser: 1001 8 privileged: false 9 allowPrivilegeEscalation: true 10 runAsNonRoot: true 11 volumeMounts: - mountPath: /mnt/huge 12 name: hugepages resources: limits: openshift.io/sriovnic: \"1\" 13 memory: \"1Gi\" cpu: \"4\" 14 hugepages-1Gi: \"4Gi\" 15 requests: openshift.io/sriovnic: \"1\" memory: \"1Gi\" cpu: \"4\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] runtimeClassName: performance-cnf-performanceprofile 16 volumes: - name: hugepages emptyDir: medium: HugePages",
"oc create -f dpdk-pod-rootless.yaml",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: globallyDisableIrqLoadBalancing: true cpu: isolated: 21-51,73-103 1 reserved: 0-20,52-72 2 hugepages: defaultHugepagesSize: 1G 3 pages: - count: 32 size: 1G net: userLevelNetworking: true numa: topologyPolicy: \"single-numa-node\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"",
"oc create -f mlx-dpdk-perfprofile-policy.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-1 namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci 1 needVhostNet: true 2 nicSelector: pfNames: [\"ens3f0\"] nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numVfs: 10 priority: 99 resourceName: dpdk_nic_1 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-1 namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci needVhostNet: true nicSelector: pfNames: [\"ens3f1\"] nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numVfs: 10 priority: 99 resourceName: dpdk_nic_2",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-1 namespace: openshift-sriov-network-operator spec: deviceType: netdevice 1 isRdma: true 2 nicSelector: rootDevices: - \"0000:5e:00.1\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numVfs: 5 priority: 99 resourceName: dpdk_nic_1 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-2 namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: true nicSelector: rootDevices: - \"0000:5e:00.0\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numVfs: 5 priority: 99 resourceName: dpdk_nic_2",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: dpdk-network-1 namespace: openshift-sriov-network-operator spec: ipam: '{\"type\": \"host-local\",\"ranges\": [[{\"subnet\": \"10.0.1.0/24\"}]],\"dataDir\": \"/run/my-orchestrator/container-ipam-state-1\"}' 1 networkNamespace: dpdk-test 2 spoofChk: \"off\" trust: \"on\" resourceName: dpdk_nic_1 3 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: dpdk-network-2 namespace: openshift-sriov-network-operator spec: ipam: '{\"type\": \"host-local\",\"ranges\": [[{\"subnet\": \"10.0.2.0/24\"}]],\"dataDir\": \"/run/my-orchestrator/container-ipam-state-1\"}' networkNamespace: dpdk-test spoofChk: \"off\" trust: \"on\" resourceName: dpdk_nic_2",
"apiVersion: v1 kind: Namespace metadata: name: dpdk-test --- apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: '[ 1 { \"name\": \"dpdk-network-1\", \"namespace\": \"dpdk-test\" }, { \"name\": \"dpdk-network-2\", \"namespace\": \"dpdk-test\" } ]' irq-load-balancing.crio.io: \"disable\" 2 cpu-load-balancing.crio.io: \"disable\" cpu-quota.crio.io: \"disable\" labels: app: dpdk name: testpmd namespace: dpdk-test spec: runtimeClassName: performance-performance 3 containers: - command: - /bin/bash - -c - sleep INF image: registry.redhat.io/openshift4/dpdk-base-rhel8 imagePullPolicy: Always name: dpdk resources: 4 limits: cpu: \"16\" hugepages-1Gi: 8Gi memory: 2Gi requests: cpu: \"16\" hugepages-1Gi: 8Gi memory: 2Gi securityContext: capabilities: add: - IPC_LOCK - SYS_RESOURCE - NET_RAW - NET_ADMIN runAsUser: 0 volumeMounts: - mountPath: /mnt/huge name: hugepages terminationGracePeriodSeconds: 5 volumes: - emptyDir: medium: HugePages name: hugepages",
"#!/bin/bash set -ex export CPU=USD(cat /sys/fs/cgroup/cpuset/cpuset.cpus) echo USD{CPU} dpdk-testpmd -l USD{CPU} -a USD{PCIDEVICE_OPENSHIFT_IO_DPDK_NIC_1} -a USD{PCIDEVICE_OPENSHIFT_IO_DPDK_NIC_2} -n 4 -- -i --nb-cores=15 --rxd=4096 --txd=4096 --rxq=7 --txq=7 --forward-mode=mac --eth-peer=0,50:00:00:00:00:01 --eth-peer=1,50:00:00:00:00:02",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: mlx-rdma-node-policy namespace: openshift-sriov-network-operator spec: resourceName: mlxnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" priority: <priority> numVfs: <num> nicSelector: vendor: \"15b3\" deviceID: \"1015\" 1 pfNames: [\"<pf_name>\", ...] rootDevices: [\"<pci_bus_id>\", \"...\"] deviceType: netdevice 2 isRdma: true 3",
"oc create -f mlx-rdma-node-policy.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: mlx-rdma-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- 1 vlan: <vlan> resourceName: mlxnics",
"oc create -f mlx-rdma-network.yaml",
"apiVersion: v1 kind: Pod metadata: name: rdma-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: mlx-rdma-network spec: containers: - name: testpmd image: <RDMA_image> 2 securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] 3 volumeMounts: - mountPath: /mnt/huge 4 name: hugepage resources: limits: memory: \"1Gi\" cpu: \"4\" 5 hugepages-1Gi: \"4Gi\" 6 requests: memory: \"1Gi\" cpu: \"4\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages",
"oc create -f mlx-rdma-pod.yaml",
"apiVersion: v1 kind: Pod metadata: name: testpmd-dpdk namespace: mynamespace annotations: cpu-load-balancing.crio.io: \"disable\" cpu-quota.crio.io: \"disable\" spec: containers: - name: testpmd command: [\"sleep\", \"99999\"] image: registry.redhat.io/openshift4/dpdk-base-rhel8:v4.9 securityContext: capabilities: add: [\"IPC_LOCK\",\"SYS_ADMIN\"] privileged: true runAsUser: 0 resources: requests: memory: 1000Mi hugepages-1Gi: 1Gi cpu: '2' openshift.io/dpdk1: 1 1 limits: hugepages-1Gi: 1Gi cpu: '2' memory: 1000Mi openshift.io/dpdk1: 1 volumeMounts: - mountPath: /mnt/huge name: hugepage readOnly: False runtimeClassName: performance-cnf-performanceprofile 2 volumes: - name: hugepage emptyDir: medium: HugePages",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: bond-net1 namespace: demo spec: config: '{ \"type\": \"bond\", 1 \"cniVersion\": \"0.3.1\", \"name\": \"bond-net1\", \"mode\": \"active-backup\", 2 \"failOverMac\": 1, 3 \"linksInContainer\": true, 4 \"miimon\": \"100\", \"mtu\": 1500, \"links\": [ 5 {\"name\": \"net1\"}, {\"name\": \"net2\"} ], \"ipam\": { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"routes\": [{ \"dst\": \"0.0.0.0/0\" }], \"gateway\": \"10.56.217.1\" } }'",
"apiVersion: v1 kind: Pod metadata: name: bondpod1 namespace: demo annotations: k8s.v1.cni.cncf.io/networks: demo/sriovnet1, demo/sriovnet2, demo/bond-net1 1 spec: containers: - name: podexample image: quay.io/openshift/origin-network-interface-bond-cni:4.11.0 command: [\"/bin/bash\", \"-c\", \"sleep INF\"]",
"oc apply -f podbonding.yaml",
"oc rsh -n demo bondpod1 sh-4.4# sh-4.4# ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 3: eth0@if150: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue state UP link/ether 62:b1:b5:c8:fb:7a brd ff:ff:ff:ff:ff:ff inet 10.244.1.122/24 brd 10.244.1.255 scope global eth0 valid_lft forever preferred_lft forever 4: net3: <BROADCAST,MULTICAST,UP,LOWER_UP400> mtu 1500 qdisc noqueue state UP qlen 1000 link/ether 9e:23:69:42:fb:8a brd ff:ff:ff:ff:ff:ff 1 inet 10.56.217.66/24 scope global bond0 valid_lft forever preferred_lft forever 43: net1: <BROADCAST,MULTICAST,UP,LOWER_UP800> mtu 1500 qdisc mq master bond0 state UP qlen 1000 link/ether 9e:23:69:42:fb:8a brd ff:ff:ff:ff:ff:ff 2 44: net2: <BROADCAST,MULTICAST,UP,LOWER_UP800> mtu 1500 qdisc mq master bond0 state UP qlen 1000 link/ether 9e:23:69:42:fb:8a brd ff:ff:ff:ff:ff:ff 3",
"annotations: k8s.v1.cni.cncf.io/networks: demo/sriovnet1, demo/sriovnet2, demo/bond-net1@bond0",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default 1 namespace: openshift-sriov-network-operator spec: enableInjector: true enableOperatorWebhook: true configurationMode: \"systemd\" 2 logLevel: 2",
"oc apply -f sriovOperatorConfig.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: mcp-offloading 1 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,mcp-offloading]} 2 nodeSelector: matchLabels: node-role.kubernetes.io/mcp-offloading: \"\" 3",
"oc create -f mcp-offloading.yaml",
"oc label node worker-2 node-role.kubernetes.io/mcp-offloading=\"\"",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 2d v1.31.3 master-1 Ready master 2d v1.31.3 master-2 Ready master 2d v1.31.3 worker-0 Ready worker 2d v1.31.3 worker-1 Ready worker 2d v1.31.3 worker-2 Ready mcp-offloading,worker 47h v1.31.3 worker-3 Ready mcp-offloading,worker 47h v1.31.3",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkPoolConfig metadata: name: sriovnetworkpoolconfig-offload namespace: openshift-sriov-network-operator spec: ovsHardwareOffloadConfig: name: mcp-offloading 1",
"oc create -f <SriovNetworkPoolConfig_name>.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-node-policy 1 namespace: openshift-sriov-network-operator spec: deviceType: netdevice 2 eSwitchMode: \"switchdev\" 3 nicSelector: deviceID: \"1019\" rootDevices: - 0000:d8:00.0 vendor: \"15b3\" pfNames: - ens8f0 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 6 priority: 5 resourceName: mlxnics",
"oc create -f sriov-node-policy.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: USD{name} namespace: openshift-sriov-network-operator spec: deviceType: switchdev isRdma: true nicSelector: netFilter: openstack/NetworkID:USD{net_id} nodeSelector: feature.node.kubernetes.io/network-sriov.capable: 'true' numVfs: 1 priority: 99 resourceName: USD{name}",
"oc label node <node-name> network.operator.openshift.io/smart-nic=",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-node-mgmt-vf-policy namespace: openshift-sriov-network-operator spec: deviceType: netdevice eSwitchMode: \"switchdev\" nicSelector: deviceID: \"1019\" rootDevices: - 0000:d8:00.0 vendor: \"15b3\" pfNames: - ens8f0#0-0 1 nodeSelector: network.operator.openshift.io/smart-nic: \"\" numVfs: 6 2 priority: 5 resourceName: mgmtvf",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-node-policy namespace: openshift-sriov-network-operator spec: deviceType: netdevice eSwitchMode: \"switchdev\" nicSelector: deviceID: \"1019\" rootDevices: - 0000:d8:00.0 vendor: \"15b3\" pfNames: - ens8f0#1-5 1 nodeSelector: network.operator.openshift.io/smart-nic: \"\" numVfs: 6 2 priority: 5 resourceName: mlxnics",
"oc create -f sriov-node-policy.yaml",
"oc create -f sriov-node-mgmt-vf-policy.yaml",
"apiVersion: v1 kind: ConfigMap metadata: name: hardware-offload-config namespace: openshift-network-operator data: mgmt-port-resource-name: openshift.io/mgmtvf",
"oc create -f hardware-offload-config.yaml",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: net-attach-def 1 namespace: net-attach-def 2 annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/mlxnics 3 spec: config: '{\"cniVersion\":\"0.3.1\",\"name\":\"ovn-kubernetes\",\"type\":\"ovn-k8s-cni-overlay\",\"ipam\":{},\"dns\":{}}'",
"oc create -f net-attach-def.yaml",
"oc get net-attach-def -A",
"NAMESPACE NAME AGE net-attach-def net-attach-def 43h",
". metadata: annotations: v1.multus-cni.io/default-network: net-attach-def/net-attach-def 1",
"oc label node <example_node_name_one> node-role.kubernetes.io/sriov=",
"oc label node <example_node_name_two> node-role.kubernetes.io/sriov=",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: sriov spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,sriov]} nodeSelector: matchLabels: node-role.kubernetes.io/sriov: \"\"",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: sriov name: 99-bf2-dpu spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,ZmluZF9jb250YWluZXIoKSB7CiAgY3JpY3RsIHBzIC1vIGpzb24gfCBqcSAtciAnLmNvbnRhaW5lcnNbXSB8IHNlbGVjdCgubWV0YWRhdGEubmFtZT09InNyaW92LW5ldHdvcmstY29uZmlnLWRhZW1vbiIpIHwgLmlkJwp9CnVudGlsIG91dHB1dD0kKGZpbmRfY29udGFpbmVyKTsgW1sgLW4gIiRvdXRwdXQiIF1dOyBkbwogIGVjaG8gIndhaXRpbmcgZm9yIGNvbnRhaW5lciB0byBjb21lIHVwIgogIHNsZWVwIDE7CmRvbmUKISBzdWRvIGNyaWN0bCBleGVjICRvdXRwdXQgL2JpbmRhdGEvc2NyaXB0cy9iZjItc3dpdGNoLW1vZGUuc2ggIiRAIgo= mode: 0755 overwrite: true path: /etc/default/switch_in_sriov_config_daemon.sh systemd: units: - name: dpu-switch.service enabled: true contents: | [Unit] Description=Switch BlueField2 card to NIC/DPU mode RequiresMountsFor=%t/containers Wants=network.target After=network-online.target kubelet.service [Service] SuccessExitStatus=0 120 RemainAfterExit=True ExecStart=/bin/bash -c '/etc/default/switch_in_sriov_config_daemon.sh nic || shutdown -r now' 1 Type=oneshot [Install] WantedBy=multi-user.target",
"I1006 16:09:50.985852 60651 helper_linux.go:73] Found default gateway interface br-ex 192.168.127.1 I1006 16:09:50.985923 60651 helper_linux.go:73] Found default gateway interface ens4 fe80::5054:ff:febe:bcd4 F1006 16:09:50.985939 60651 ovnkube.go:130] multiple gateway interfaces detected: br-ex ens4",
"I0512 19:07:17.589083 108432 helper_linux.go:74] Found default gateway interface br-ex 192.168.123.1 F0512 19:07:17.589141 108432 ovnkube.go:133] failed to get default gateway interface",
"oc get all,ep,cm -n openshift-ovn-kubernetes",
"Warning: apps.openshift.io/v1 DeploymentConfig is deprecated in v4.14+, unavailable in v4.10000+ NAME READY STATUS RESTARTS AGE pod/ovnkube-control-plane-65c6f55656-6d55h 2/2 Running 0 114m pod/ovnkube-control-plane-65c6f55656-fd7vw 2/2 Running 2 (104m ago) 114m pod/ovnkube-node-bcvts 8/8 Running 0 113m pod/ovnkube-node-drgvv 8/8 Running 0 113m pod/ovnkube-node-f2pxt 8/8 Running 0 113m pod/ovnkube-node-frqsb 8/8 Running 0 105m pod/ovnkube-node-lbxkk 8/8 Running 0 105m pod/ovnkube-node-tt7bx 8/8 Running 1 (102m ago) 105m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/ovn-kubernetes-control-plane ClusterIP None <none> 9108/TCP 114m service/ovn-kubernetes-node ClusterIP None <none> 9103/TCP,9105/TCP 114m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/ovnkube-node 6 6 6 6 6 beta.kubernetes.io/os=linux 114m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/ovnkube-control-plane 3/3 3 3 114m NAME DESIRED CURRENT READY AGE replicaset.apps/ovnkube-control-plane-65c6f55656 3 3 3 114m NAME ENDPOINTS AGE endpoints/ovn-kubernetes-control-plane 10.0.0.3:9108,10.0.0.4:9108,10.0.0.5:9108 114m endpoints/ovn-kubernetes-node 10.0.0.3:9105,10.0.0.4:9105,10.0.0.5:9105 + 9 more... 114m NAME DATA AGE configmap/control-plane-status 1 113m configmap/kube-root-ca.crt 1 114m configmap/openshift-service-ca.crt 1 114m configmap/ovn-ca 1 114m configmap/ovnkube-config 1 114m configmap/signer-ca 1 114m",
"oc get pods ovnkube-node-bcvts -o jsonpath='{.spec.containers[*].name}' -n openshift-ovn-kubernetes",
"ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller",
"oc get pods ovnkube-control-plane-65c6f55656-6d55h -o jsonpath='{.spec.containers[*].name}' -n openshift-ovn-kubernetes",
"kube-rbac-proxy ovnkube-cluster-manager",
"oc get po -n openshift-ovn-kubernetes",
"NAME READY STATUS RESTARTS AGE ovnkube-control-plane-8444dff7f9-4lh9k 2/2 Running 0 27m ovnkube-control-plane-8444dff7f9-5rjh9 2/2 Running 0 27m ovnkube-node-55xs2 8/8 Running 0 26m ovnkube-node-7r84r 8/8 Running 0 16m ovnkube-node-bqq8p 8/8 Running 0 17m ovnkube-node-mkj4f 8/8 Running 0 26m ovnkube-node-mlr8k 8/8 Running 0 26m ovnkube-node-wqn2m 8/8 Running 0 16m",
"oc get pods -n openshift-ovn-kubernetes -owide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES ovnkube-control-plane-8444dff7f9-4lh9k 2/2 Running 0 27m 10.0.0.3 ci-ln-t487nnb-72292-mdcnq-master-1 <none> <none> ovnkube-control-plane-8444dff7f9-5rjh9 2/2 Running 0 27m 10.0.0.4 ci-ln-t487nnb-72292-mdcnq-master-2 <none> <none> ovnkube-node-55xs2 8/8 Running 0 26m 10.0.0.4 ci-ln-t487nnb-72292-mdcnq-master-2 <none> <none> ovnkube-node-7r84r 8/8 Running 0 17m 10.0.128.3 ci-ln-t487nnb-72292-mdcnq-worker-b-wbz7z <none> <none> ovnkube-node-bqq8p 8/8 Running 0 17m 10.0.128.2 ci-ln-t487nnb-72292-mdcnq-worker-a-lh7ms <none> <none> ovnkube-node-mkj4f 8/8 Running 0 27m 10.0.0.5 ci-ln-t487nnb-72292-mdcnq-master-0 <none> <none> ovnkube-node-mlr8k 8/8 Running 0 27m 10.0.0.3 ci-ln-t487nnb-72292-mdcnq-master-1 <none> <none> ovnkube-node-wqn2m 8/8 Running 0 17m 10.0.128.4 ci-ln-t487nnb-72292-mdcnq-worker-c-przlm <none> <none>",
"oc rsh -c nbdb -n openshift-ovn-kubernetes ovnkube-node-55xs2",
"ovn-nbctl show",
"oc exec -n openshift-ovn-kubernetes -it ovnkube-node-55xs2 -c northd -- ovn-nbctl lr-list",
"45339f4f-7d0b-41d0-b5f9-9fca9ce40ce6 (GR_ci-ln-t487nnb-72292-mdcnq-master-2) 96a0a0f0-e7ed-4fec-8393-3195563de1b8 (ovn_cluster_router)",
"oc exec -n openshift-ovn-kubernetes -it ovnkube-node-55xs2 -c nbdb -- ovn-nbctl ls-list",
"bdd7dc3d-d848-4a74-b293-cc15128ea614 (ci-ln-t487nnb-72292-mdcnq-master-2) b349292d-ee03-4914-935f-1940b6cb91e5 (ext_ci-ln-t487nnb-72292-mdcnq-master-2) 0aac0754-ea32-4e33-b086-35eeabf0a140 (join) 992509d7-2c3f-4432-88db-c179e43592e5 (transit_switch)",
"oc exec -n openshift-ovn-kubernetes -it ovnkube-node-55xs2 -c nbdb -- ovn-nbctl lb-list",
"UUID LB PROTO VIP IPs 7c84c673-ed2a-4436-9a1f-9bc5dd181eea Service_default/ tcp 172.30.0.1:443 10.0.0.3:6443,169.254.169.2:6443,10.0.0.5:6443 4d663fd9-ddc8-4271-b333-4c0e279e20bb Service_default/ tcp 172.30.0.1:443 10.0.0.3:6443,10.0.0.4:6443,10.0.0.5:6443 292eb07f-b82f-4962-868a-4f541d250bca Service_openshif tcp 172.30.105.247:443 10.129.0.12:8443 034b5a7f-bb6a-45e9-8e6d-573a82dc5ee3 Service_openshif tcp 172.30.192.38:443 10.0.0.3:10259,10.0.0.4:10259,10.0.0.5:10259 a68bb53e-be84-48df-bd38-bdd82fcd4026 Service_openshif tcp 172.30.161.125:8443 10.129.0.32:8443 6cc21b3d-2c54-4c94-8ff5-d8e017269c2e Service_openshif tcp 172.30.3.144:443 10.129.0.22:8443 37996ffd-7268-4862-a27f-61cd62e09c32 Service_openshif tcp 172.30.181.107:443 10.129.0.18:8443 81d4da3c-f811-411f-ae0c-bc6713d0861d Service_openshif tcp 172.30.228.23:443 10.129.0.29:8443 ac5a4f3b-b6ba-4ceb-82d0-d84f2c41306e Service_openshif tcp 172.30.14.240:9443 10.129.0.36:9443 c88979fb-1ef5-414b-90ac-43b579351ac9 Service_openshif tcp 172.30.231.192:9001 10.128.0.5:9001,10.128.2.5:9001,10.129.0.5:9001,10.129.2.4:9001,10.130.0.3:9001,10.131.0.3:9001 fcb0a3fb-4a77-4230-a84a-be45dce757e8 Service_openshif tcp 172.30.189.92:443 10.130.0.17:8440 67ef3e7b-ceb9-4bf0-8d96-b43bde4c9151 Service_openshif tcp 172.30.67.218:443 10.129.0.9:8443 d0032fba-7d5e-424a-af25-4ab9b5d46e81 Service_openshif tcp 172.30.102.137:2379 10.0.0.3:2379,10.0.0.4:2379,10.0.0.5:2379 tcp 172.30.102.137:9979 10.0.0.3:9979,10.0.0.4:9979,10.0.0.5:9979 7361c537-3eec-4e6c-bc0c-0522d182abd4 Service_openshif tcp 172.30.198.215:9001 10.0.0.3:9001,10.0.0.4:9001,10.0.0.5:9001,10.0.128.2:9001,10.0.128.3:9001,10.0.128.4:9001 0296c437-1259-410b-a6fd-81c310ad0af5 Service_openshif tcp 172.30.198.215:9001 10.0.0.3:9001,169.254.169.2:9001,10.0.0.5:9001,10.0.128.2:9001,10.0.128.3:9001,10.0.128.4:9001 5d5679f5-45b8-479d-9f7c-08b123c688b8 Service_openshif tcp 172.30.38.253:17698 10.128.0.52:17698,10.129.0.84:17698,10.130.0.60:17698 2adcbab4-d1c9-447d-9573-b5dc9f2efbfa Service_openshif tcp 172.30.148.52:443 10.0.0.4:9202,10.0.0.5:9202 tcp 172.30.148.52:444 10.0.0.4:9203,10.0.0.5:9203 tcp 172.30.148.52:445 10.0.0.4:9204,10.0.0.5:9204 tcp 172.30.148.52:446 10.0.0.4:9205,10.0.0.5:9205 2a33a6d7-af1b-4892-87cc-326a380b809b Service_openshif tcp 172.30.67.219:9091 10.129.2.16:9091,10.131.0.16:9091 tcp 172.30.67.219:9092 10.129.2.16:9092,10.131.0.16:9092 tcp 172.30.67.219:9093 10.129.2.16:9093,10.131.0.16:9093 tcp 172.30.67.219:9094 10.129.2.16:9094,10.131.0.16:9094 f56f59d7-231a-4974-99b3-792e2741ec8d Service_openshif tcp 172.30.89.212:443 10.128.0.41:8443,10.129.0.68:8443,10.130.0.44:8443 08c2c6d7-d217-4b96-b5d8-c80c4e258116 Service_openshif tcp 172.30.102.137:2379 10.0.0.3:2379,169.254.169.2:2379,10.0.0.5:2379 tcp 172.30.102.137:9979 10.0.0.3:9979,169.254.169.2:9979,10.0.0.5:9979 60a69c56-fc6a-4de6-bd88-3f2af5ba5665 Service_openshif tcp 172.30.10.193:443 10.129.0.25:8443 ab1ef694-0826-4671-a22c-565fc2d282ec Service_openshif tcp 172.30.196.123:443 10.128.0.33:8443,10.129.0.64:8443,10.130.0.37:8443 b1fb34d3-0944-4770-9ee3-2683e7a630e2 Service_openshif tcp 172.30.158.93:8443 10.129.0.13:8443 95811c11-56e2-4877-be1e-c78ccb3a82a9 Service_openshif tcp 172.30.46.85:9001 10.130.0.16:9001 4baba1d1-b873-4535-884c-3f6fc07a50fd Service_openshif tcp 172.30.28.87:443 10.129.0.26:8443 6c2e1c90-f0ca-484e-8a8e-40e71442110a Service_openshif udp 172.30.0.10:53 10.128.0.13:5353,10.128.2.6:5353,10.129.0.39:5353,10.129.2.6:5353,10.130.0.11:5353,10.131.0.9:5353",
"oc exec -n openshift-ovn-kubernetes -it ovnkube-node-55xs2 -c nbdb ovn-nbctl --help",
"oc get po -n openshift-ovn-kubernetes",
"NAME READY STATUS RESTARTS AGE ovnkube-control-plane-8444dff7f9-4lh9k 2/2 Running 0 27m ovnkube-control-plane-8444dff7f9-5rjh9 2/2 Running 0 27m ovnkube-node-55xs2 8/8 Running 0 26m ovnkube-node-7r84r 8/8 Running 0 16m ovnkube-node-bqq8p 8/8 Running 0 17m ovnkube-node-mkj4f 8/8 Running 0 26m ovnkube-node-mlr8k 8/8 Running 0 26m ovnkube-node-wqn2m 8/8 Running 0 16m",
"oc get pods -n openshift-ovn-kubernetes -owide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES ovnkube-control-plane-8444dff7f9-4lh9k 2/2 Running 0 27m 10.0.0.3 ci-ln-t487nnb-72292-mdcnq-master-1 <none> <none> ovnkube-control-plane-8444dff7f9-5rjh9 2/2 Running 0 27m 10.0.0.4 ci-ln-t487nnb-72292-mdcnq-master-2 <none> <none> ovnkube-node-55xs2 8/8 Running 0 26m 10.0.0.4 ci-ln-t487nnb-72292-mdcnq-master-2 <none> <none> ovnkube-node-7r84r 8/8 Running 0 17m 10.0.128.3 ci-ln-t487nnb-72292-mdcnq-worker-b-wbz7z <none> <none> ovnkube-node-bqq8p 8/8 Running 0 17m 10.0.128.2 ci-ln-t487nnb-72292-mdcnq-worker-a-lh7ms <none> <none> ovnkube-node-mkj4f 8/8 Running 0 27m 10.0.0.5 ci-ln-t487nnb-72292-mdcnq-master-0 <none> <none> ovnkube-node-mlr8k 8/8 Running 0 27m 10.0.0.3 ci-ln-t487nnb-72292-mdcnq-master-1 <none> <none> ovnkube-node-wqn2m 8/8 Running 0 17m 10.0.128.4 ci-ln-t487nnb-72292-mdcnq-worker-c-przlm <none> <none>",
"oc rsh -c sbdb -n openshift-ovn-kubernetes ovnkube-node-55xs2",
"ovn-sbctl show",
"Chassis \"5db31703-35e9-413b-8cdf-69e7eecb41f7\" hostname: ci-ln-9gp362t-72292-v2p94-worker-a-8bmwz Encap geneve ip: \"10.0.128.4\" options: {csum=\"true\"} Port_Binding tstor-ci-ln-9gp362t-72292-v2p94-worker-a-8bmwz Chassis \"070debed-99b7-4bce-b17d-17e720b7f8bc\" hostname: ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Encap geneve ip: \"10.0.128.2\" options: {csum=\"true\"} Port_Binding k8s-ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Port_Binding rtoe-GR_ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Port_Binding openshift-monitoring_alertmanager-main-1 Port_Binding rtoj-GR_ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Port_Binding etor-GR_ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Port_Binding cr-rtos-ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Port_Binding openshift-e2e-loki_loki-promtail-qcrcz Port_Binding jtor-GR_ci-ln-9gp362t-72292-v2p94-worker-b-svmp6 Port_Binding openshift-multus_network-metrics-daemon-mkd4t Port_Binding openshift-ingress-canary_ingress-canary-xtvj4 Port_Binding openshift-ingress_router-default-6c76cbc498-pvlqk Port_Binding openshift-dns_dns-default-zz582 Port_Binding openshift-monitoring_thanos-querier-57585899f5-lbf4f Port_Binding openshift-network-diagnostics_network-check-target-tn228 Port_Binding openshift-monitoring_prometheus-k8s-0 Port_Binding openshift-image-registry_image-registry-68899bd877-xqxjj Chassis \"179ba069-0af1-401c-b044-e5ba90f60fea\" hostname: ci-ln-9gp362t-72292-v2p94-master-0 Encap geneve ip: \"10.0.0.5\" options: {csum=\"true\"} Port_Binding tstor-ci-ln-9gp362t-72292-v2p94-master-0 Chassis \"68c954f2-5a76-47be-9e84-1cb13bd9dab9\" hostname: ci-ln-9gp362t-72292-v2p94-worker-c-mjf9w Encap geneve ip: \"10.0.128.3\" options: {csum=\"true\"} Port_Binding tstor-ci-ln-9gp362t-72292-v2p94-worker-c-mjf9w Chassis \"2de65d9e-9abf-4b6e-a51d-a1e038b4d8af\" hostname: ci-ln-9gp362t-72292-v2p94-master-2 Encap geneve ip: \"10.0.0.4\" options: {csum=\"true\"} Port_Binding tstor-ci-ln-9gp362t-72292-v2p94-master-2 Chassis \"1d371cb8-5e21-44fd-9025-c4b162cc4247\" hostname: ci-ln-9gp362t-72292-v2p94-master-1 Encap geneve ip: \"10.0.0.3\" options: {csum=\"true\"} Port_Binding tstor-ci-ln-9gp362t-72292-v2p94-master-1",
"oc exec -n openshift-ovn-kubernetes -it ovnkube-node-55xs2 -c sbdb ovn-sbctl --help",
"git clone [email protected]:openshift/network-tools.git",
"cd network-tools",
"./debug-scripts/network-tools -h",
"./debug-scripts/network-tools ovn-db-run-command ovn-nbctl lr-list",
"944a7b53-7948-4ad2-a494-82b55eeccf87 (GR_ci-ln-54932yb-72292-kd676-worker-c-rzj99) 84bd4a4c-4b0b-4a47-b0cf-a2c32709fc53 (ovn_cluster_router)",
"./debug-scripts/network-tools ovn-db-run-command ovn-sbctl find Port_Binding type=localnet",
"_uuid : d05298f5-805b-4838-9224-1211afc2f199 additional_chassis : [] additional_encap : [] chassis : [] datapath : f3c2c959-743b-4037-854d-26627902597c encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : br-ex_ci-ln-54932yb-72292-kd676-worker-c-rzj99 mac : [unknown] mirror_rules : [] nat_addresses : [] options : {network_name=physnet} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 2 type : localnet up : false virtual_parent : [] [...]",
"./debug-scripts/network-tools ovn-db-run-command ovn-sbctl find Port_Binding type=l3gateway",
"_uuid : 5207a1f3-1cf3-42f1-83e9-387bbb06b03c additional_chassis : [] additional_encap : [] chassis : ca6eb600-3a10-4372-a83e-e0d957c4cd92 datapath : f3c2c959-743b-4037-854d-26627902597c encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : etor-GR_ci-ln-54932yb-72292-kd676-worker-c-rzj99 mac : [\"42:01:0a:00:80:04\"] mirror_rules : [] nat_addresses : [\"42:01:0a:00:80:04 10.0.128.4\"] options : {l3gateway-chassis=\"84737c36-b383-4c83-92c5-2bd5b3c7e772\", peer=rtoe-GR_ci-ln-54932yb-72292-kd676-worker-c-rzj99} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 1 type : l3gateway up : true virtual_parent : [] _uuid : 6088d647-84f2-43f2-b53f-c9d379042679 additional_chassis : [] additional_encap : [] chassis : ca6eb600-3a10-4372-a83e-e0d957c4cd92 datapath : dc9cea00-d94a-41b8-bdb0-89d42d13aa2e encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : jtor-GR_ci-ln-54932yb-72292-kd676-worker-c-rzj99 mac : [router] mirror_rules : [] nat_addresses : [] options : {l3gateway-chassis=\"84737c36-b383-4c83-92c5-2bd5b3c7e772\", peer=rtoj-GR_ci-ln-54932yb-72292-kd676-worker-c-rzj99} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 2 type : l3gateway up : true virtual_parent : [] [...]",
"./debug-scripts/network-tools ovn-db-run-command ovn-sbctl find Port_Binding type=patch",
"_uuid : 785fb8b6-ee5a-4792-a415-5b1cb855dac2 additional_chassis : [] additional_encap : [] chassis : [] datapath : f1ddd1cc-dc0d-43b4-90ca-12651305acec encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : stor-ci-ln-54932yb-72292-kd676-worker-c-rzj99 mac : [router] mirror_rules : [] nat_addresses : [\"0a:58:0a:80:02:01 10.128.2.1 is_chassis_resident(\\\"cr-rtos-ci-ln-54932yb-72292-kd676-worker-c-rzj99\\\")\"] options : {peer=rtos-ci-ln-54932yb-72292-kd676-worker-c-rzj99} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 1 type : patch up : false virtual_parent : [] _uuid : c01ff587-21a5-40b4-8244-4cd0425e5d9a additional_chassis : [] additional_encap : [] chassis : [] datapath : f6795586-bf92-4f84-9222-efe4ac6a7734 encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : rtoj-ovn_cluster_router mac : [\"0a:58:64:40:00:01 100.64.0.1/16\"] mirror_rules : [] nat_addresses : [] options : {peer=jtor-ovn_cluster_router} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 1 type : patch up : false virtual_parent : [] [...]",
"oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node -o json | jq '.items[0].spec.containers[] | .name,.readinessProbe'",
"oc get events -n openshift-ovn-kubernetes",
"oc describe pod ovnkube-node-9lqfk -n openshift-ovn-kubernetes",
"oc get co/network -o json | jq '.status.conditions[]'",
"for p in USD(oc get pods --selector app=ovnkube-node -n openshift-ovn-kubernetes -o jsonpath='{range.items[*]}{\" \"}{.metadata.name}'); do echo === USDp ===; get pods -n openshift-ovn-kubernetes USDp -o json | jq '.status.containerStatuses[] | .name, .ready'; done",
"ALERT_MANAGER=USD(oc get route alertmanager-main -n openshift-monitoring -o jsonpath='{@.spec.host}')",
"curl -s -k -H \"Authorization: Bearer USD(oc create token prometheus-k8s -n openshift-monitoring)\" https://USDALERT_MANAGER/api/v1/alerts | jq '.data[] | \"\\(.labels.severity) \\(.labels.alertname) \\(.labels.pod) \\(.labels.container) \\(.labels.endpoint) \\(.labels.instance)\"'",
"oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -s 'http://localhost:9090/api/v1/rules' | jq '.data.groups[].rules[] | select(((.name|contains(\"ovn\")) or (.name|contains(\"OVN\")) or (.name|contains(\"Ovn\")) or (.name|contains(\"North\")) or (.name|contains(\"South\"))) and .type==\"alerting\")'",
"oc logs -f <pod_name> -c <container_name> -n <namespace>",
"oc logs ovnkube-node-5dx44 -n openshift-ovn-kubernetes",
"oc logs -f ovnkube-node-5dx44 -c ovnkube-controller -n openshift-ovn-kubernetes",
"for p in USD(oc get pods --selector app=ovnkube-node -n openshift-ovn-kubernetes -o jsonpath='{range.items[*]}{\" \"}{.metadata.name}'); do echo === USDp ===; for container in USD(oc get pods -n openshift-ovn-kubernetes USDp -o json | jq -r '.status.containerStatuses[] | .name');do echo ---USDcontainer---; logs -c USDcontainer USDp -n openshift-ovn-kubernetes --tail=5; done; done",
"oc logs -l app=ovnkube-node -n openshift-ovn-kubernetes --all-containers --tail 5",
"oc get po -o wide -n openshift-ovn-kubernetes",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES ovnkube-control-plane-65497d4548-9ptdr 2/2 Running 2 (128m ago) 147m 10.0.0.3 ci-ln-3njdr9b-72292-5nwkp-master-0 <none> <none> ovnkube-control-plane-65497d4548-j6zfk 2/2 Running 0 147m 10.0.0.5 ci-ln-3njdr9b-72292-5nwkp-master-2 <none> <none> ovnkube-node-5dx44 8/8 Running 0 146m 10.0.0.3 ci-ln-3njdr9b-72292-5nwkp-master-0 <none> <none> ovnkube-node-dpfn4 8/8 Running 0 146m 10.0.0.4 ci-ln-3njdr9b-72292-5nwkp-master-1 <none> <none> ovnkube-node-kwc9l 8/8 Running 0 134m 10.0.128.2 ci-ln-3njdr9b-72292-5nwkp-worker-a-2fjcj <none> <none> ovnkube-node-mcrhl 8/8 Running 0 134m 10.0.128.4 ci-ln-3njdr9b-72292-5nwkp-worker-c-v9x5v <none> <none> ovnkube-node-nsct4 8/8 Running 0 146m 10.0.0.5 ci-ln-3njdr9b-72292-5nwkp-master-2 <none> <none> ovnkube-node-zrj9f 8/8 Running 0 134m 10.0.128.3 ci-ln-3njdr9b-72292-5nwkp-worker-b-v78h7 <none> <none>",
"kind: ConfigMap apiVersion: v1 metadata: name: env-overrides namespace: openshift-ovn-kubernetes data: ci-ln-3njdr9b-72292-5nwkp-master-0: | 1 # This sets the log level for the ovn-kubernetes node process: OVN_KUBE_LOG_LEVEL=5 # You might also/instead want to enable debug logging for ovn-controller: OVN_LOG_LEVEL=dbg ci-ln-3njdr9b-72292-5nwkp-master-2: | # This sets the log level for the ovn-kubernetes node process: OVN_KUBE_LOG_LEVEL=5 # You might also/instead want to enable debug logging for ovn-controller: OVN_LOG_LEVEL=dbg _master: | 2 # This sets the log level for the ovn-kubernetes master process as well as the ovn-dbchecker: OVN_KUBE_LOG_LEVEL=5 # You might also/instead want to enable debug logging for northd, nbdb and sbdb on all masters: OVN_LOG_LEVEL=dbg",
"oc apply -n openshift-ovn-kubernetes -f env-overrides.yaml",
"configmap/env-overrides.yaml created",
"oc delete pod -n openshift-ovn-kubernetes --field-selector spec.nodeName=ci-ln-3njdr9b-72292-5nwkp-master-0 -l app=ovnkube-node",
"oc delete pod -n openshift-ovn-kubernetes --field-selector spec.nodeName=ci-ln-3njdr9b-72292-5nwkp-master-2 -l app=ovnkube-node",
"oc delete pod -n openshift-ovn-kubernetes -l app=ovnkube-node",
"oc logs -n openshift-ovn-kubernetes --all-containers --prefix ovnkube-node-<xxxx> | grep -E -m 10 '(Logging config:|vconsole|DBG)'",
"[pod/ovnkube-node-2cpjc/sbdb] + exec /usr/share/ovn/scripts/ovn-ctl --no-monitor '--ovn-sb-log=-vconsole:info -vfile:off -vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' run_sb_ovsdb [pod/ovnkube-node-2cpjc/ovnkube-controller] I1012 14:39:59.984506 35767 config.go:2247] Logging config: {File: CNIFile:/var/log/ovn-kubernetes/ovn-k8s-cni-overlay.log LibovsdbFile:/var/log/ovnkube/libovsdb.log Level:5 LogFileMaxSize:100 LogFileMaxBackups:5 LogFileMaxAge:0 ACLLoggingRateLimit:20} [pod/ovnkube-node-2cpjc/northd] + exec ovn-northd --no-chdir -vconsole:info -vfile:off '-vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' --pidfile /var/run/ovn/ovn-northd.pid --n-threads=1 [pod/ovnkube-node-2cpjc/nbdb] + exec /usr/share/ovn/scripts/ovn-ctl --no-monitor '--ovn-nb-log=-vconsole:info -vfile:off -vPATTERN:console:%D{%Y-%m-%dT%H:%M:%S.###Z}|%05N|%c%T|%p|%m' run_nb_ovsdb [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.552Z|00002|hmap|DBG|lib/shash.c:114: 1 bucket with 6+ nodes, including 1 bucket with 6 nodes (32 nodes total across 32 buckets) [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.553Z|00003|hmap|DBG|lib/shash.c:114: 1 bucket with 6+ nodes, including 1 bucket with 6 nodes (64 nodes total across 64 buckets) [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.553Z|00004|hmap|DBG|lib/shash.c:114: 1 bucket with 6+ nodes, including 1 bucket with 7 nodes (32 nodes total across 32 buckets) [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.553Z|00005|reconnect|DBG|unix:/var/run/openvswitch/db.sock: entering BACKOFF [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.553Z|00007|reconnect|DBG|unix:/var/run/openvswitch/db.sock: entering CONNECTING [pod/ovnkube-node-2cpjc/ovn-controller] 2023-10-12T14:39:54.553Z|00008|ovsdb_cs|DBG|unix:/var/run/openvswitch/db.sock: SERVER_SCHEMA_REQUESTED -> SERVER_SCHEMA_REQUESTED at lib/ovsdb-cs.c:423",
"for f in USD(oc -n openshift-ovn-kubernetes get po -l 'app=ovnkube-node' --no-headers -o custom-columns=N:.metadata.name) ; do echo \"---- USDf ----\" ; oc -n openshift-ovn-kubernetes exec -c ovnkube-controller USDf -- pgrep -a -f init-ovnkube-controller | grep -P -o '^.*loglevel\\s+\\d' ; done",
"---- ovnkube-node-2dt57 ---- 60981 /usr/bin/ovnkube --init-ovnkube-controller xpst8-worker-c-vmh5n.c.openshift-qe.internal --init-node xpst8-worker-c-vmh5n.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4 ---- ovnkube-node-4zznh ---- 178034 /usr/bin/ovnkube --init-ovnkube-controller xpst8-master-2.c.openshift-qe.internal --init-node xpst8-master-2.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4 ---- ovnkube-node-548sx ---- 77499 /usr/bin/ovnkube --init-ovnkube-controller xpst8-worker-a-fjtnb.c.openshift-qe.internal --init-node xpst8-worker-a-fjtnb.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4 ---- ovnkube-node-6btrf ---- 73781 /usr/bin/ovnkube --init-ovnkube-controller xpst8-worker-b-p8rww.c.openshift-qe.internal --init-node xpst8-worker-b-p8rww.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4 ---- ovnkube-node-fkc9r ---- 130707 /usr/bin/ovnkube --init-ovnkube-controller xpst8-master-0.c.openshift-qe.internal --init-node xpst8-master-0.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 5 ---- ovnkube-node-tk9l4 ---- 181328 /usr/bin/ovnkube --init-ovnkube-controller xpst8-master-1.c.openshift-qe.internal --init-node xpst8-master-1.c.openshift-qe.internal --config-file=/run/ovnkube-config/ovnkube.conf --ovn-empty-lb-events --loglevel 4",
"oc get podnetworkconnectivitychecks -n openshift-network-diagnostics",
"oc get podnetworkconnectivitychecks -n openshift-network-diagnostics -o json | jq '.items[]| .spec.targetEndpoint,.status.successes[0]'",
"oc get podnetworkconnectivitychecks -n openshift-network-diagnostics -o json | jq '.items[]| .spec.targetEndpoint,.status.failures[0]'",
"oc get podnetworkconnectivitychecks -n openshift-network-diagnostics -o json | jq '.items[]| .spec.targetEndpoint,.status.outages[0]'",
"oc exec prometheus-k8s-0 -n openshift-monitoring -- promtool query instant http://localhost:9090 '{component=\"openshift-network-diagnostics\"}'",
"oc exec prometheus-k8s-0 -n openshift-monitoring -- promtool query instant http://localhost:9090 '{component=\"openshift-network-diagnostics\"}'",
"oc patch --type=merge --patch '{\"spec\": {\"featureSet\": \"TechPreviewNoUpgrade\"}}' featuregate/cluster",
"featuregate.config.openshift.io/cluster patched",
"oc get featuregate cluster -o yaml",
"featureGates: enabled: - name: OVNObservability",
"oc get pods -n <namespace> -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES destination-pod 1/1 Running 0 53s 10.131.0.23 ci-ln-1gqp7b2-72292-bb9dv-worker-a-gtmpc <none> <none> source-pod 1/1 Running 0 56s 10.131.0.22 ci-ln-1gqp7b2-72292-bb9dv-worker-a-gtmpc <none> <none>",
"oc get pods -n openshift-ovn-kubernetes -o wide",
"NAME ... READY STATUS RESTARTS AGE IP NODE NOMINATED NODE ovnkube-node-jzn5b 8/8 Running 1 (34m ago) 37m 10.0.128.2 ci-ln-1gqp7b2-72292-bb9dv-worker-a-gtmpc <none>",
"oc exec -it <pod_name> -n openshift-ovn-kubernetes -- bash",
"/usr/bin/ovnkube-observ -add-ovs-collector",
"2024/12/02 19:41:41.327584 OVN-K message: Allowed by default allow from local node policy, direction ingress 2024/12/02 19:41:41.327593 src=10.131.0.2, dst=10.131.0.6 2024/12/02 19:41:41.327692 OVN-K message: Allowed by default allow from local node policy, direction ingress 2024/12/02 19:41:41.327715 src=10.131.0.6, dst=10.131.0.2",
"/usr/bin/ovnkube-observ -add-ovs-collector -filter-src-ip <pod_ip_address>",
"Found group packets, id 14 2024/12/10 16:27:12.456473 OVN-K message: Allowed by admin network policy allow-egress-group1, direction Egress 2024/12/10 16:27:12.456570 src=10.131.0.22, dst=10.131.0.23 2024/12/10 16:27:14.484421 OVN-K message: Allowed by admin network policy allow-egress-group1, direction Egress 2024/12/10 16:27:14.484428 src=10.131.0.22, dst=10.131.0.23 2024/12/10 16:27:12.457222 OVN-K message: Allowed by network policy test:allow-ingress-from-specific-pod, direction Ingress 2024/12/10 16:27:12.457228 src=10.131.0.22, dst=10.131.0.23 2024/12/10 16:27:12.457288 OVN-K message: Allowed by network policy test:allow-ingress-from-specific-pod, direction Ingress 2024/12/10 16:27:12.457299 src=10.131.0.22, dst=10.131.0.23",
"/usr/bin/ovnkube-observ <flag>",
"POD=USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-control-plane -o name | head -1 | awk -F '/' '{print USDNF}')",
"oc cp -n openshift-ovn-kubernetes USDPOD:/usr/bin/ovnkube-trace -c ovnkube-cluster-manager ovnkube-trace",
"chmod +x ovnkube-trace",
"./ovnkube-trace -help",
"Usage of ./ovnkube-trace: -addr-family string Address family (ip4 or ip6) to be used for tracing (default \"ip4\") -dst string dest: destination pod name -dst-ip string destination IP address (meant for tests to external targets) -dst-namespace string k8s namespace of dest pod (default \"default\") -dst-port string dst-port: destination port (default \"80\") -kubeconfig string absolute path to the kubeconfig file -loglevel string loglevel: klog level (default \"0\") -ovn-config-namespace string namespace used by ovn-config itself -service string service: destination service name -skip-detrace skip ovn-detrace command -src string src: source pod name -src-namespace string k8s namespace of source pod (default \"default\") -tcp use tcp transport protocol -udp use udp transport protocol",
"oc run web --namespace=default --image=quay.io/openshifttest/nginx --labels=\"app=web\" --expose --port=80",
"get pods -n openshift-dns",
"NAME READY STATUS RESTARTS AGE dns-default-8s42x 2/2 Running 0 5h8m dns-default-mdw6r 2/2 Running 0 4h58m dns-default-p8t5h 2/2 Running 0 4h58m dns-default-rl6nk 2/2 Running 0 5h8m dns-default-xbgqx 2/2 Running 0 5h8m dns-default-zv8f6 2/2 Running 0 4h58m node-resolver-62jjb 1/1 Running 0 5h8m node-resolver-8z4cj 1/1 Running 0 4h59m node-resolver-bq244 1/1 Running 0 5h8m node-resolver-hc58n 1/1 Running 0 4h59m node-resolver-lm6z4 1/1 Running 0 5h8m node-resolver-zfx5k 1/1 Running 0 5h",
"./ovnkube-trace -src-namespace default \\ 1 -src web \\ 2 -dst-namespace openshift-dns \\ 3 -dst dns-default-p8t5h \\ 4 -udp -dst-port 53 \\ 5 -loglevel 0 6",
"ovn-trace source pod to destination pod indicates success from web to dns-default-p8t5h ovn-trace destination pod to source pod indicates success from dns-default-p8t5h to web ovs-appctl ofproto/trace source pod to destination pod indicates success from web to dns-default-p8t5h ovs-appctl ofproto/trace destination pod to source pod indicates success from dns-default-p8t5h to web ovn-detrace source pod to destination pod indicates success from web to dns-default-p8t5h ovn-detrace destination pod to source pod indicates success from dns-default-p8t5h to web",
"ovn-trace source pod to destination pod indicates success from web to dns-default-8s42x ovn-trace (remote) source pod to destination pod indicates success from web to dns-default-8s42x ovn-trace destination pod to source pod indicates success from dns-default-8s42x to web ovn-trace (remote) destination pod to source pod indicates success from dns-default-8s42x to web ovs-appctl ofproto/trace source pod to destination pod indicates success from web to dns-default-8s42x ovs-appctl ofproto/trace destination pod to source pod indicates success from dns-default-8s42x to web ovn-detrace source pod to destination pod indicates success from web to dns-default-8s42x ovn-detrace destination pod to source pod indicates success from dns-default-8s42x to web",
"./ovnkube-trace -src-namespace default -src web -dst-namespace openshift-dns -dst dns-default-467qw -udp -dst-port 53 -loglevel 2",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default namespace: default spec: podSelector: {} ingress: []",
"oc apply -f deny-by-default.yaml",
"networkpolicy.networking.k8s.io/deny-by-default created",
"oc run web --namespace=default --image=quay.io/openshifttest/nginx --labels=\"app=web\" --expose --port=80",
"oc create namespace prod",
"oc label namespace/prod purpose=production",
"oc run test-6459 --namespace=prod --rm -i -t --image=alpine -- sh",
"./ovnkube-trace -src-namespace prod -src test-6459 -dst-namespace default -dst web -tcp -dst-port 80 -loglevel 0",
"ovn-trace source pod to destination pod indicates failure from test-6459 to web",
"./ovnkube-trace -src-namespace prod -src test-6459 -dst-namespace default -dst web -tcp -dst-port 80 -loglevel 2",
"------------------------------------------------ 3. ls_out_acl_hint (northd.c:7454): !ct.new && ct.est && !ct.rpl && ct_mark.blocked == 0, priority 4, uuid 12efc456 reg0[8] = 1; reg0[10] = 1; next; 5. ls_out_acl_action (northd.c:7835): reg8[30..31] == 0, priority 500, uuid 69372c5d reg8[30..31] = 1; next(4); 5. ls_out_acl_action (northd.c:7835): reg8[30..31] == 1, priority 500, uuid 2fa0af89 reg8[30..31] = 2; next(4); 4. ls_out_acl_eval (northd.c:7691): reg8[30..31] == 2 && reg0[10] == 1 && (outport == @a16982411286042166782_ingressDefaultDeny), priority 2000, uuid 447d0dab reg8[17] = 1; ct_commit { ct_mark.blocked = 1; }; 1 next;",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-prod namespace: default spec: podSelector: matchLabels: app: web policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production",
"oc apply -f web-allow-prod.yaml",
"./ovnkube-trace -src-namespace prod -src test-6459 -dst-namespace default -dst web -tcp -dst-port 80 -loglevel 0",
"ovn-trace source pod to destination pod indicates success from test-6459 to web ovn-trace destination pod to source pod indicates success from web to test-6459 ovs-appctl ofproto/trace source pod to destination pod indicates success from test-6459 to web ovs-appctl ofproto/trace destination pod to source pod indicates success from web to test-6459 ovn-detrace source pod to destination pod indicates success from test-6459 to web ovn-detrace destination pod to source pod indicates success from web to test-6459",
"wget -qO- --timeout=2 http://web.default",
"<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>",
"- op: add path: /spec/platformSpec/baremetal/machineNetworks/- 1 value: 192.168.1.0/24 #",
"- op: add path: /spec/clusterNetwork/- value: 1 cidr: fd01::/48 hostPrefix: 64 - op: add path: /spec/serviceNetwork/- value: fd02::/112 2",
"oc patch network.config.openshift.io cluster \\ 1 --type='json' --patch-file <file>.yaml",
"network.config.openshift.io/cluster patched",
"- op: add path: /spec/platformSpec/baremetal/machineNetworks/- 1 value: fd2e:6f44:5dd8::/64 - op: add path: /spec/platformSpec/baremetal/apiServerInternalIPs/- 2 value: fd2e:6f44:5dd8::4 - op: add path: /spec/platformSpec/baremetal/ingressIPs/- value: fd2e:6f44:5dd8::5",
"oc patch infrastructure cluster \\ 1 --type='json' --patch-file <file>.yaml",
"infrastructure/cluster patched",
"oc describe network",
"Status: Cluster Network: Cidr: 10.128.0.0/14 Host Prefix: 23 Cidr: fd01::/48 Host Prefix: 64 Cluster Network MTU: 1400 Network Type: OVNKubernetes Service Network: 172.30.0.0/16 fd02::/112",
"oc describe infrastructure",
"spec: platformSpec: baremetal: apiServerInternalIPs: - 192.168.123.5 - fd2e:6f44:5dd8::4 ingressIPs: - 192.168.123.10 - fd2e:6f44:5dd8::5 status: platformStatus: baremetal: apiServerInternalIP: 192.168.123.5 apiServerInternalIPs: - 192.168.123.5 - fd2e:6f44:5dd8::4 ingressIP: 192.168.123.10 ingressIPs: - 192.168.123.10 - fd2e:6f44:5dd8::5",
"oc edit networks.config.openshift.io",
"oc patch network.operator.openshift.io cluster --type='merge' -p='{\"spec\":{\"defaultNetwork\":{\"ovnKubernetesConfig\": {\"ipv4\":{\"internalJoinSubnet\": \"<join_subnet>\"}, \"ipv6\":{\"internalJoinSubnet\": \"<join_subnet>\"}}}}}'",
"network.operator.openshift.io/cluster patched",
"oc get network.operator.openshift.io -o jsonpath=\"{.items[0].spec.defaultNetwork}\"",
"{ \"ovnKubernetesConfig\": { \"ipv4\": { \"internalJoinSubnet\": \"100.64.1.0/16\" }, }, \"type\": \"OVNKubernetes\" }",
"oc patch networks.operator.openshift.io cluster --type=merge -p '{\"spec\":{\"defaultNetwork\":{\"ovnKubernetesConfig\":{\"gatewayConfig\":{\"ipv4\":{\"internalMasqueradeSubnet\": \"<ipv4_masquerade_subnet>\"},\"ipv6\":{\"internalMasqueradeSubnet\": \"<ipv6_masquerade_subnet>\"}}}}}}'",
"oc patch networks.operator.openshift.io cluster --type=merge -p '{\"spec\":{\"defaultNetwork\":{\"ovnKubernetesConfig\":{\"gatewayConfig\":{\"ipv4\":{\"internalMasqueradeSubnet\": \"<ipv4_masquerade_subnet>\"}}}}}}'",
"oc patch network.operator.openshift.io cluster --type='merge' -p='{\"spec\":{\"defaultNetwork\":{\"ovnKubernetesConfig\": {\"ipv4\":{\"internalTransitSwitchSubnet\": \"<transit_subnet>\"}, \"ipv6\":{\"internalTransitSwitchSubnet\": \"<transit_subnet>\"}}}}}'",
"network.operator.openshift.io/cluster patched",
"oc get network.operator.openshift.io -o jsonpath=\"{.items[0].spec.defaultNetwork}\"",
"{ \"ovnKubernetesConfig\": { \"ipv4\": { \"internalTransitSwitchSubnet\": \"100.88.1.0/16\" }, }, \"type\": \"OVNKubernetes\" }",
"from: namespaceSelector: matchLabels: kubernetes.io/metadata.name: novxlan-externalgw-ecmp-4059",
"apiVersion: k8s.ovn.org/v1 kind: AdminPolicyBasedExternalRoute metadata: name: default-route-policy spec: from: namespaceSelector: matchLabels: kubernetes.io/metadata.name: novxlan-externalgw-ecmp-4059 nextHops: static: - ip: \"172.18.0.8\" - ip: \"172.18.0.9\"",
"apiVersion: k8s.ovn.org/v1 kind: AdminPolicyBasedExternalRoute metadata: name: shadow-traffic-policy spec: from: namespaceSelector: matchLabels: externalTraffic: \"\" nextHops: dynamic: - podSelector: matchLabels: gatewayPod: \"\" namespaceSelector: matchLabels: shadowTraffic: \"\" networkAttachmentName: shadow-gateway - podSelector: matchLabels: gigabyteGW: \"\" namespaceSelector: matchLabels: gatewayNamespace: \"\" networkAttachmentName: gateway",
"apiVersion: k8s.ovn.org/v1 kind: AdminPolicyBasedExternalRoute metadata: name: multi-hop-policy spec: from: namespaceSelector: matchLabels: trafficType: \"egress\" nextHops: static: - ip: \"172.18.0.8\" - ip: \"172.18.0.9\" dynamic: - podSelector: matchLabels: gatewayPod: \"\" namespaceSelector: matchLabels: egressTraffic: \"\" networkAttachmentName: gigabyte",
"oc create -f <file>.yaml",
"adminpolicybasedexternalroute.k8s.ovn.org/default-route-policy created",
"oc describe apbexternalroute <name> | tail -n 6",
"Status: Last Transition Time: 2023-04-24T15:09:01Z Messages: Configured external gateway IPs: 172.18.0.8 Status: Success Events: <none>",
"IP capacity = public cloud default capacity - sum(current IP assignments)",
"cloud.network.openshift.io/egress-ipconfig: [ { \"interface\":\"eni-078d267045138e436\", \"ifaddr\":{\"ipv4\":\"10.0.128.0/18\"}, \"capacity\":{\"ipv4\":14,\"ipv6\":15} } ]",
"cloud.network.openshift.io/egress-ipconfig: [ { \"interface\":\"nic0\", \"ifaddr\":{\"ipv4\":\"10.0.128.0/18\"}, \"capacity\":{\"ip\":14} } ]",
"ip -details link show",
"spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: ovnKubernetesConfig: gatewayConfig: ipForwarding: Global",
"apiVersion: v1 kind: Namespace metadata: name: namespace1 labels: env: prod --- apiVersion: v1 kind: Namespace metadata: name: namespace2 labels: env: prod",
"apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egressips-prod spec: egressIPs: - 192.168.126.10 - 192.168.126.102 namespaceSelector: matchLabels: env: prod status: items: - node: node1 egressIP: 192.168.126.10 - node: node3 egressIP: 192.168.126.102",
"apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: <name> 1 spec: egressIPs: 2 - <ip_address> namespaceSelector: 3 podSelector: 4",
"namespaceSelector: 1 matchLabels: <label_name>: <label_value>",
"podSelector: 1 matchLabels: <label_name>: <label_value>",
"apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-group1 spec: egressIPs: - 192.168.126.11 - 192.168.126.102 podSelector: matchLabels: app: web namespaceSelector: matchLabels: env: prod",
"apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-group2 spec: egressIPs: - 192.168.127.30 - 192.168.127.40 namespaceSelector: matchExpressions: - key: environment operator: NotIn values: - development",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: ovnKubernetesConfig: egressIPConfig: 1 reachabilityTotalTimeoutSeconds: 5 2 gatewayConfig: routingViaHost: false genevePort: 6081",
"oc label nodes <node_name> k8s.ovn.org/egress-assignable=\"\" 1",
"apiVersion: v1 kind: Node metadata: labels: k8s.ovn.org/egress-assignable: \"\" name: <node_name>",
"apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-project1 spec: egressIPs: - 192.168.127.10 - 192.168.127.11 namespaceSelector: matchLabels: env: qa",
"oc apply -f <egressips_name>.yaml 1",
"egressips.k8s.ovn.org/<egressips_name> created",
"oc label ns <namespace> env=qa 1",
"oc get egressip -o yaml",
"spec: egressIPs: - 192.168.127.10 - 192.168.127.11",
"apiVersion: k8s.ovn.org/v1 kind: EgressService metadata: name: <egress_service_name> 1 namespace: <namespace> 2 spec: sourceIPBy: <egress_traffic_ip> 3 nodeSelector: 4 matchLabels: node-role.kubernetes.io/<role>: \"\" network: <egress_traffic_network> 5",
"apiVersion: k8s.ovn.org/v1 kind: EgressService metadata: name: test-egress-service namespace: test-namespace spec: sourceIPBy: \"LoadBalancerIP\" nodeSelector: matchLabels: vrf: \"true\" network: \"2\"",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: example-pool namespace: metallb-system spec: addresses: - 172.19.0.100/32",
"oc apply -f ip-addr-pool.yaml",
"apiVersion: v1 kind: Service metadata: name: example-service namespace: example-namespace annotations: metallb.io/address-pool: example-pool 1 spec: selector: app: example ports: - name: http protocol: TCP port: 8080 targetPort: 8080 type: LoadBalancer --- apiVersion: k8s.ovn.org/v1 kind: EgressService metadata: name: example-service namespace: example-namespace spec: sourceIPBy: \"LoadBalancerIP\" 2 nodeSelector: 3 matchLabels: node-role.kubernetes.io/worker: \"\"",
"oc apply -f service-egress-service.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: example-bgp-adv namespace: metallb-system spec: ipAddressPools: - example-pool nodeSelectors: - matchLabels: egress-service.k8s.ovn.org/example-namespace-example-service: \"\" 1",
"curl <external_ip_address>:<port_number> 1",
"curl <router_service_IP> <port>",
"openstack port set --allowed-address ip_address=<ip_address>,mac_address=<mac_address> <neutron_port_uuid>",
"apiVersion: v1 kind: Service metadata: name: app-egress spec: ports: - name: tcp-8080 protocol: TCP port: 8080 - name: tcp-8443 protocol: TCP port: 8443 - name: udp-80 protocol: UDP port: 80 type: ClusterIP selector: app: egress-router-cni",
"apiVersion: network.operator.openshift.io/v1 kind: EgressRouter metadata: name: <egress_router_name> namespace: <namespace> 1 spec: addresses: [ 2 { ip: \"<egress_router>\", 3 gateway: \"<egress_gateway>\" 4 } ] mode: Redirect redirect: { redirectRules: [ 5 { destinationIP: \"<egress_destination>\", port: <egress_router_port>, targetPort: <target_port>, 6 protocol: <network_protocol> 7 }, ], fallbackIP: \"<egress_destination>\" 8 }",
"apiVersion: network.operator.openshift.io/v1 kind: EgressRouter metadata: name: egress-router-redirect spec: networkInterface: { macvlan: { mode: \"Bridge\" } } addresses: [ { ip: \"192.168.12.99/24\", gateway: \"192.168.12.1\" } ] mode: Redirect redirect: { redirectRules: [ { destinationIP: \"10.0.0.99\", port: 80, protocol: UDP }, { destinationIP: \"203.0.113.26\", port: 8080, targetPort: 80, protocol: TCP }, { destinationIP: \"203.0.113.27\", port: 8443, targetPort: 443, protocol: TCP } ] }",
"apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: web-app protocol: TCP port: 8080 type: ClusterIP selector: app: egress-router-cni 1",
"oc get network-attachment-definition egress-router-cni-nad",
"NAME AGE egress-router-cni-nad 18m",
"oc get deployment egress-router-cni-deployment",
"NAME READY UP-TO-DATE AVAILABLE AGE egress-router-cni-deployment 1/1 1 1 18m",
"oc get pods -l app=egress-router-cni",
"NAME READY STATUS RESTARTS AGE egress-router-cni-deployment-575465c75c-qkq6m 1/1 Running 0 18m",
"POD_NODENAME=USD(oc get pod -l app=egress-router-cni -o jsonpath=\"{.items[0].spec.nodeName}\")",
"oc debug node/USDPOD_NODENAME",
"chroot /host",
"cat /tmp/egress-router-log",
"2021-04-26T12:27:20Z [debug] Called CNI ADD 2021-04-26T12:27:20Z [debug] Gateway: 192.168.12.1 2021-04-26T12:27:20Z [debug] IP Source Addresses: [192.168.12.99/24] 2021-04-26T12:27:20Z [debug] IP Destinations: [80 UDP 10.0.0.99/30 8080 TCP 203.0.113.26/30 80 8443 TCP 203.0.113.27/30 443] 2021-04-26T12:27:20Z [debug] Created macvlan interface 2021-04-26T12:27:20Z [debug] Renamed macvlan to \"net1\" 2021-04-26T12:27:20Z [debug] Adding route to gateway 192.168.12.1 on macvlan interface 2021-04-26T12:27:20Z [debug] deleted default route {Ifindex: 3 Dst: <nil> Src: <nil> Gw: 10.128.10.1 Flags: [] Table: 254} 2021-04-26T12:27:20Z [debug] Added new default route with gateway 192.168.12.1 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p UDP --dport 80 -j DNAT --to-destination 10.0.0.99 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p TCP --dport 8080 -j DNAT --to-destination 203.0.113.26:80 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p TCP --dport 8443 -j DNAT --to-destination 203.0.113.27:443 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat -o net1 -j SNAT --to-source 192.168.12.99",
"crictl ps --name egress-router-cni-pod | awk '{print USD1}'",
"CONTAINER bac9fae69ddb6",
"crictl inspect -o yaml bac9fae69ddb6 | grep 'pid:' | awk '{print USD2}'",
"68857",
"nsenter -n -t 68857",
"ip route",
"default via 192.168.12.1 dev net1 10.128.10.0/23 dev eth0 proto kernel scope link src 10.128.10.18 192.168.12.0/24 dev net1 proto kernel scope link src 192.168.12.99 192.168.12.1 dev net1",
"oc annotate namespace <namespace> k8s.ovn.org/multicast-enabled=true",
"apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: k8s.ovn.org/multicast-enabled: \"true\"",
"oc project <project>",
"cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: containers: - name: mlistener image: registry.access.redhat.com/ubi9 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat hostname && sleep inf\"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF",
"cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi9 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat && sleep inf\"] EOF",
"POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}')",
"oc exec mlistener -i -t -- socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname",
"CIDR=USD(oc get Network.config.openshift.io cluster -o jsonpath='{.status.clusterNetwork[0].cidr}')",
"oc exec msender -i -t -- /bin/bash -c \"echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64\"",
"mlistener",
"oc annotate namespace <namespace> \\ 1 k8s.ovn.org/multicast-enabled-",
"apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: k8s.ovn.org/multicast-enabled: null",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: exportNetworkFlows: netFlow: collectors: - 192.168.1.99:2056",
"spec: exportNetworkFlows: netFlow: collectors: - 192.168.1.99:2056",
"oc patch network.operator cluster --type merge -p \"USD(cat <file_name>.yaml)\"",
"network.operator.openshift.io/cluster patched",
"oc get network.operator cluster -o jsonpath=\"{.spec.exportNetworkFlows}\"",
"{\"netFlow\":{\"collectors\":[\"192.168.1.99:2056\"]}}",
"for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node -o jsonpath='{[email protected][*]}{.metadata.name}{\"\\n\"}{end}'); do ; echo; echo USDpod; oc -n openshift-ovn-kubernetes exec -c ovnkube-controller USDpod -- bash -c 'for type in ipfix sflow netflow ; do ovs-vsctl find USDtype ; done'; done",
"ovnkube-node-xrn4p _uuid : a4d2aaca-5023-4f3d-9400-7275f92611f9 active_timeout : 60 add_id_to_interface : false engine_id : [] engine_type : [] external_ids : {} targets : [\"192.168.1.99:2056\"] ovnkube-node-z4vq9 _uuid : 61d02fdb-9228-4993-8ff5-b27f01a29bd6 active_timeout : 60 add_id_to_interface : false engine_id : [] engine_type : [] external_ids : {} targets : [\"192.168.1.99:2056\"]-",
"oc patch network.operator cluster --type='json' -p='[{\"op\":\"remove\", \"path\":\"/spec/exportNetworkFlows\"}]'",
"network.operator.openshift.io/cluster patched",
"oc patch networks.operator.openshift.io cluster --type=merge -p '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"hybridOverlayConfig\":{ \"hybridClusterNetwork\":[ { \"cidr\": \"<cidr>\", \"hostPrefix\": <prefix> } ], \"hybridOverlayVXLANPort\": <overlay_port> } } } } }'",
"network.operator.openshift.io/cluster patched",
"oc get network.operator.openshift.io -o jsonpath=\"{.items[0].spec.defaultNetwork.ovnKubernetesConfig}\"",
"oc new-project hello-openshift",
"oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json",
"oc expose pod/hello-openshift",
"oc expose svc hello-openshift",
"oc get routes -o yaml <name of resource> 1",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: hello-openshift spec: host: www.example.com 1 port: targetPort: 8080 2 to: kind: Service name: hello-openshift",
"oc get ingresses.config/cluster -o jsonpath={.spec.domain}",
"oc new-project hello-openshift",
"oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json",
"oc expose pod/hello-openshift",
"apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded 1 name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift 2 tls: termination: edge to: kind: Service name: hello-openshift",
"oc -n hello-openshift create -f hello-openshift-route.yaml",
"oc -n hello-openshift get routes/hello-openshift-edge -o yaml",
"apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift tls: termination: edge to: kind: Service name: hello-openshift status: ingress: - host: hello-openshift.<apps-sharded.basedomain.example.net> 1 routerCanonicalHostname: router-sharded.<apps-sharded.basedomain.example.net> 2 routerName: sharded 3",
"oc annotate route <route_name> --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1",
"oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s",
"oc annotate route <route_name> -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header=max-age=31536000; includeSubDomains;preload\"",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload 1 2 3 spec: host: def.abc.com tls: termination: \"reencrypt\" wildcardPolicy: \"Subdomain\"",
"oc annotate route <route_name> -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=0\"",
"metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=0",
"oc annotate route --all -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=0\"",
"oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{USDa := index .metadata.annotations \"haproxy.router.openshift.io/hsts_header\"}}{{USDn := .metadata.name}}{{with USDa}}Name: {{USDn}} HSTS: {{USDa}}{{\"\\n\"}}{{else}}{{\"\"}}{{end}}{{end}}{{end}}'",
"Name: routename HSTS: max-age=0",
"oc edit ingresses.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: 'hello-openshift-default.apps.username.devcluster.openshift.com' requiredHSTSPolicies: 1 - domainPatterns: 2 - '*hello-openshift-default.apps.username.devcluster.openshift.com' - '*hello-openshift-default2.apps.username.devcluster.openshift.com' namespaceSelector: 3 matchLabels: myPolicy: strict maxAge: 4 smallestMaxAge: 1 largestMaxAge: 31536000 preloadPolicy: RequirePreload 5 includeSubDomainsPolicy: RequireIncludeSubDomains 6 - domainPatterns: - 'abc.example.com' - '*xyz.example.com' namespaceSelector: matchLabels: {} maxAge: {} preloadPolicy: NoOpinion includeSubDomainsPolicy: RequireNoIncludeSubDomains",
"oc annotate route --all --all-namespaces --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=31536000\"",
"oc annotate route --all -n my-namespace --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=31536000\"",
"oc get clusteroperator/ingress -n openshift-ingress-operator -o jsonpath='{range .spec.requiredHSTSPolicies[*]}{.spec.requiredHSTSPolicies.maxAgePolicy.largestMaxAge}{\"\\n\"}{end}'",
"oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{USDa := index .metadata.annotations \"haproxy.router.openshift.io/hsts_header\"}}{{USDn := .metadata.name}}{{with USDa}}Name: {{USDn}} HSTS: {{USDa}}{{\"\\n\"}}{{else}}{{\"\"}}{{end}}{{end}}{{end}}'",
"Name: <_routename_> HSTS: max-age=31536000;preload;includeSubDomains",
"tcpdump -s 0 -i any -w /tmp/dump.pcap host <podip 1> && host <podip 2> 1",
"tcpdump -s 0 -i any -w /tmp/dump.pcap port 4789",
"oc annotate route <route_name> router.openshift.io/cookie_name=\"<cookie_name>\"",
"oc annotate route my_route router.openshift.io/cookie_name=\"my_cookie\"",
"ROUTE_NAME=USD(oc get route <route_name> -o jsonpath='{.spec.host}')",
"curl USDROUTE_NAME -k -c /tmp/cookie_jar",
"curl USDROUTE_NAME -k -b /tmp/cookie_jar",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-unsecured spec: host: www.example.com path: \"/test\" 1 to: kind: Service name: service-name",
"apiVersion: operator.openshift.io/v1 kind: IngressController spec: httpHeaders: actions: response: - name: X-Frame-Options action: type: Set set: value: DENY",
"apiVersion: route.openshift.io/v1 kind: Route spec: httpHeaders: actions: response: - name: X-Frame-Options action: type: Set set: value: SAMEORIGIN",
"frontend public http-response set-header X-Frame-Options 'DENY' frontend fe_sni http-response set-header X-Frame-Options 'DENY' frontend fe_no_sni http-response set-header X-Frame-Options 'DENY' backend be_secure:openshift-monitoring:alertmanager-main http-response set-header X-Frame-Options 'SAMEORIGIN'",
"apiVersion: route.openshift.io/v1 kind: Route spec: host: app.example.com tls: termination: edge to: kind: Service name: app-example httpHeaders: actions: 1 response: 2 - name: Content-Location 3 action: type: Set 4 set: value: /lang/en-us 5",
"oc -n app-example create -f app-example-route.yaml",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/timeout: 5500ms 1",
"metadata: annotations: haproxy.router.openshift.io/ip_allowlist: 192.168.1.10",
"metadata: annotations: haproxy.router.openshift.io/ip_allowlist: 192.168.1.10 192.168.1.11 192.168.1.12",
"metadata: annotations: haproxy.router.openshift.io/ip_allowlist: 192.168.1.0/24",
"metadata: annotations: haproxy.router.openshift.io/ip_allowlist: 180.5.61.153 192.168.1.0/24 10.0.0.0/8",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/rewrite-target: / 1",
"oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{\"spec\":{\"routeAdmission\":{\"namespaceOwnership\":\"InterNamespaceAllowed\"}}}' --type=merge",
"spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: \"reencrypt\" 1 route.openshift.io/destination-ca-certificate-secret: secret-ca-cert 2 spec: rules: - host: www.example.com 3 http: paths: - backend: service: name: frontend port: number: 443 path: / pathType: Prefix tls: - hosts: - www.example.com secretName: example-com-tls-certificate",
"spec: rules: - host: www.example.com http: paths: - path: '' pathType: ImplementationSpecific backend: service: name: frontend port: number: 443",
"oc apply -f ingress.yaml",
"oc get routes",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD frontend-gnztq www.example.com frontend 443 reencrypt/Redirect None",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-gnztq ownerReferences: - apiVersion: networking.k8s.io/v1 controller: true kind: Ingress name: frontend uid: 4e6c59cc-704d-4f44-b390-617d879033b6 spec: host: www.example.com path: / port: targetPort: https tls: certificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- insecureEdgeTerminationPolicy: Redirect key: | -----BEGIN RSA PRIVATE KEY----- [...] -----END RSA PRIVATE KEY----- termination: reencrypt destinationCACertificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- to: kind: Service name: frontend",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend spec: rules: tls: - {} 1",
"oc create -f example-ingress.yaml",
"oc get routes -o yaml",
"apiVersion: v1 items: - apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-j9sdd 1 spec: tls: 2 insecureEdgeTerminationPolicy: Redirect termination: edge 3",
"oc create secret generic dest-ca-cert --from-file=tls.crt=<file_path>",
"oc -n test-ns create secret generic dest-ca-cert --from-file=tls.crt=tls.crt",
"secret/dest-ca-cert created",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: \"reencrypt\" route.openshift.io/destination-ca-certificate-secret: secret-ca-cert 1",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend annotations: route.openshift.io/termination: reencrypt route.openshift.io/destination-ca-certificate-secret: secret-ca-cert spec: tls: insecureEdgeTerminationPolicy: Redirect termination: reencrypt destinationCACertificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----",
"apiVersion: v1 kind: Service metadata: creationTimestamp: yyyy-mm-ddT00:00:00Z labels: name: <service_name> manager: kubectl-create operation: Update time: yyyy-mm-ddT00:00:00Z name: <service_name> namespace: <namespace_name> resourceVersion: \"<resource_version_number>\" selfLink: \"/api/v1/namespaces/<namespace_name>/services/<service_name>\" uid: <uid_number> spec: clusterIP: 172.30.0.0/16 clusterIPs: 1 - 172.30.0.0/16 - <second_IP_address> ipFamilies: 2 - IPv4 - IPv6 ipFamilyPolicy: RequireDualStack 3 ports: - port: 8080 protocol: TCP targetport: 8080 selector: name: <namespace_name> sessionAffinity: None type: ClusterIP status: loadbalancer: {}",
"oc get endpoints",
"oc get endpointslices",
"openssl rsa -in password_protected_tls.key -out tls.key",
"oc create route reencrypt --service=frontend --cert=tls.crt --key=tls.key --dest-ca-cert=destca.crt --ca-cert=ca.crt --hostname=www.example.com",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: reencrypt key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- destinationCACertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----",
"openssl rsa -in password_protected_tls.key -out tls.key",
"oc create route edge --service=frontend --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=www.example.com",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: edge key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----",
"oc create route passthrough route-passthrough-secured --service=frontend --port=8080",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-passthrough-secured 1 spec: host: www.example.com port: targetPort: 8080 tls: termination: passthrough 2 insecureEdgeTerminationPolicy: None 3 to: kind: Service name: frontend",
"oc create role secret-reader --verb=get,list,watch --resource=secrets --resource-name=<secret-name> \\ 1 --namespace=<current-namespace> 2",
"oc create rolebinding secret-reader-binding --role=secret-reader --serviceaccount=openshift-ingress:router --namespace=<current-namespace> 1",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: myedge namespace: test spec: host: myedge-test.apps.example.com tls: externalCertificate: name: <secret-name> 1 termination: edge [...] [...]",
"oc apply -f <route.yaml> 1",
"apiVersion: v1 kind: Service metadata: name: http-service spec: clusterIP: 172.30.163.110 externalIPs: - 192.168.132.253 externalTrafficPolicy: Cluster ports: - name: highport nodePort: 31903 port: 30102 protocol: TCP targetPort: 30102 selector: app: web sessionAffinity: None type: LoadBalancer status: loadBalancer: ingress: - ip: 192.168.132.253",
"{ \"policy\": { \"allowedCIDRs\": [], \"rejectedCIDRs\": [] } }",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: {}",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: allowedCIDRs: - 172.16.66.10/23 rejectedCIDRs: - 172.16.66.10/24",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 externalIP: policy: {}",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: autoAssignCIDRs: [] 1 policy: 2",
"policy: allowedCIDRs: [] 1 rejectedCIDRs: [] 2",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: autoAssignCIDRs: - 192.168.132.254/29",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: allowedCIDRs: - 192.168.132.0/29 - 192.168.132.8/29 rejectedCIDRs: - 192.168.132.7/32",
"oc describe networks.config cluster",
"oc edit networks.config cluster",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: 1",
"oc get networks.config cluster -o go-template='{{.spec.externalIP}}{{\"\\n\"}}'",
"oc adm policy add-cluster-role-to-user cluster-admin username",
"oc new-project <project_name>",
"oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git",
"oc get svc -n <project_name>",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s",
"oc project <project_name>",
"oc expose service nodejs-ex",
"route.route.openshift.io/nodejs-ex exposed",
"oc get route",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None",
"curl --head nodejs-ex-myproject.example.com",
"HTTP/1.1 200 OK",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: finops-router namespace: openshift-ingress-operator spec: namespaceSelector: matchExpressions: - key: name operator: In values: - finance - ops",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: dev-router namespace: openshift-ingress-operator spec: namespaceSelector: matchLabels: name: dev",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: devops-router namespace: openshift-ingress-operator spec: namespaceSelector: matchExpressions: - key: name operator: In values: - dev - ops",
"oc edit ingresscontroller -n openshift-ingress-operator default",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: namespaceSelector: matchExpressions: - key: name operator: NotIn values: - finance - ops - dev",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> 1 nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\" routeSelector: matchLabels: type: sharded",
"oc apply -f router-internal.yaml",
"oc expose svc <service-name> --hostname <route-name>.apps-sharded.basedomain.example.net",
"cat router-internal.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> 1 nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\" namespaceSelector: matchLabels: type: sharded",
"oc apply -f router-internal.yaml",
"oc expose svc <service-name> --hostname <route-name>.apps-sharded.basedomain.example.net",
"oc new-project hello-openshift",
"oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json",
"oc expose pod/hello-openshift",
"apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded 1 name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift 2 tls: termination: edge to: kind: Service name: hello-openshift",
"oc -n hello-openshift create -f hello-openshift-route.yaml",
"oc -n hello-openshift get routes/hello-openshift-edge -o yaml",
"apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift tls: termination: edge to: kind: Service name: hello-openshift status: ingress: - host: hello-openshift.<apps-sharded.basedomain.example.net> 1 routerCanonicalHostname: router-sharded.<apps-sharded.basedomain.example.net> 2 routerName: sharded 3",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: internal namespace: openshift-ingress-operator spec: domain: example.com endpointPublishingStrategy: type: HostNetwork hostNetwork: httpPort: 80 httpsPort: 443 statsPort: 1936",
"oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\":{\"type\":\"LoadBalancerService\",\"loadBalancer\":{\"scope\":\"Internal\"}}}}'",
"oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml",
"oc -n openshift-ingress delete services/router-default",
"oc -n openshift-ingress-operator patch ingresscontrollers/private --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\":{\"type\":\"LoadBalancerService\",\"loadBalancer\":{\"scope\":\"External\"}}}}'",
"oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml",
"oc -n openshift-ingress delete services/router-default",
"apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: <custom_ic_name> 1 namespace: openshift-ingress-operator spec: replicas: 1 domain: <custom_ic_domain_name> 2 nodePlacement: nodeSelector: matchLabels: <key>: <value> 3 namespaceSelector: matchLabels: <key>: <value> 4 endpointPublishingStrategy: type: NodePortService",
"oc label node <node_name> <key>=<value> 1",
"oc create -f <ingress_controller_cr>.yaml",
"oc get svc -n openshift-ingress",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-internal-default ClusterIP 172.30.195.74 <none> 80/TCP,443/TCP,1936/TCP 223d router-nodeport-custom-ic3 NodePort 172.30.109.219 <none> 80:32432/TCP,443:31366/TCP,1936:30499/TCP 155m",
"oc new-project <project_name>",
"oc label namespace <project_name> <key>=<value> 1",
"oc new-app --image=<image_name> 1",
"oc expose svc/<service_name> --hostname=<svc_name>-<project_name>.<custom_ic_domain_name> 1",
"oc get route/hello-openshift -o json | jq '.status.ingress'",
"{ \"conditions\": [ { \"lastTransitionTime\": \"2024-05-17T18:25:41Z\", \"status\": \"True\", \"type\": \"Admitted\" } ], [ { \"host\": \"hello-openshift.nodeportsvc.ipi-cluster.example.com\", \"routerCanonicalHostname\": \"router-nodeportsvc.nodeportsvc.ipi-cluster.example.com\", \"routerName\": \"nodeportsvc\", \"wildcardPolicy\": \"None\" } ], }",
"oc patch --type=merge -n openshift-ingress-operator ingresscontroller/default --patch '{\"spec\":{\"namespaceSelector\":{\"matchExpressions\":[{\"key\":\"<key>\",\"operator\":\"NotIn\",\"values\":[\"<value>]}]}}}'",
"dig +short <svc_name>-<project_name>.<custom_ic_domain_name>",
"curl <svc_name>-<project_name>.<custom_ic_domain_name>:<port> 1",
"Hello OpenShift!",
"oc adm policy add-cluster-role-to-user cluster-admin username",
"oc new-project <project_name>",
"oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git",
"oc get svc -n <project_name>",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s",
"oc project <project_name>",
"oc expose service nodejs-ex",
"route.route.openshift.io/nodejs-ex exposed",
"oc get route",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None",
"curl --head nodejs-ex-myproject.example.com",
"HTTP/1.1 200 OK",
"oc project project1",
"apiVersion: v1 kind: Service metadata: name: egress-2 1 spec: ports: - name: db port: 3306 2 loadBalancerIP: loadBalancerSourceRanges: 3 - 10.0.0.0/8 - 192.168.0.0/16 type: LoadBalancer 4 selector: name: mysql 5",
"oc create -f <file-name>",
"oc create -f mysql-lb.yaml",
"oc get svc",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE egress-2 LoadBalancer 172.30.22.226 ad42f5d8b303045-487804948.example.com 3306:30357/TCP 15m",
"curl <public-ip>:<port>",
"curl 172.29.121.74:3306",
"mysql -h 172.30.131.89 -u admin -p",
"Enter password: Welcome to the MariaDB monitor. Commands end with ; or \\g. MySQL [(none)]>",
"oc annotate route <route_name> --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1",
"oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\": {\"type\":\"LoadBalancerService\", \"loadBalancer\": {\"scope\":\"External\", \"providerParameters\":{\"type\":\"AWS\", \"aws\": {\"type\":\"Classic\", \"classicLoadBalancer\": {\"connectionIdleTimeout\":\"5m\"}}}}}}}'",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\": {\"loadBalancer\":{\"providerParameters\":{\"aws\":{\"classicLoadBalancer\": {\"connectionIdleTimeout\":null}}}}}}}'",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService",
"oc apply -f ingresscontroller.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: Classic type: LoadBalancerService",
"oc apply -f ingresscontroller.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService",
"oc replace --force --wait -f ingresscontroller.yml",
"oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.type}' AWS",
"cat ingresscontroller-aws-nlb.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: USDmy_ingress_controller 1 namespace: openshift-ingress-operator spec: domain: USDmy_unique_ingress_domain 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: External 3 providerParameters: type: AWS aws: type: NLB",
"oc create -f ingresscontroller-aws-nlb.yaml",
"./openshift-install create manifests --dir <installation_directory> 1",
"touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1",
"ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml",
"cluster-ingress-default-ingresscontroller.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: <name> spec: domain: <domain> endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: External dnsManagementPolicy: Managed",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: <name> 1 namespace: openshift-ingress-operator spec: domain: <domain> 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: External providerParameters: type: AWS aws: type: Classic classicLoadBalancer: 3 subnets: ids: 4 - <subnet> 5 - <subnet> - <subnet> dnsManagementPolicy: Managed",
"oc apply -f sample-ingress.yaml",
"oc get ingresscontroller -n openshift-ingress-operator <name> -o jsonpath=\"{.status.conditions}\" | yq -PC",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: <name> 1 namespace: openshift-ingress-operator spec: domain: <domain> 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: External providerParameters: type: AWS aws: type: Classic 3 classicLoadBalancer: 4 subnets: ids: 5 - <updated_subnet> 6 - <updated_subnet> - <updated_subnet>",
"oc get ingresscontroller -n openshift-ingress-operator subnets -o jsonpath=\"{.status.conditions[?(@.type==\\\"Progressing\\\")]}\" | yq -PC",
"lastTransitionTime: \"2024-11-25T20:19:31Z\" message: 'One or more status conditions indicate progressing: LoadBalancerProgressing=True (OperandsProgressing: One or more managed resources are progressing: The IngressController subnets were changed from [...] to [...]. To effectuate this change, you must delete the service: `oc -n openshift-ingress delete svc/router-<name>`; the service load-balancer will then be deprovisioned and a new one created. This will most likely cause the new load-balancer to have a different host name and IP address and cause disruption. To return to the previous state, you can revert the change to the IngressController: [...]' reason: IngressControllerProgressing status: \"True\" type: Progressing",
"oc -n openshift-ingress delete svc/router-<name>",
"oc get ingresscontroller -n openshift-ingress-operator <name> -o jsonpath=\"{.status.conditions}\" | yq -PC",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: <name> 1 spec: domain: <domain> 2 endpointPublishingStrategy: loadBalancer: scope: External 3 type: LoadBalancerService providerParameters: type: AWS aws: type: NLB networkLoadBalancer: subnets: 4 ids: - <subnet_ID> names: - <subnet_A> - <subnet_B> eipAllocations: 5 - <eipalloc_A> - <eipalloc_B> - <eipalloc_C>",
"oc apply -f sample-ingress.yaml",
"oc get ingresscontroller -n openshift-ingress-operator <name> -o jsonpath=\"{.status.conditions}\" | yq -PC",
"oc get networks.config cluster -o jsonpath='{.spec.externalIP}{\"\\n\"}'",
"apiVersion: v1 kind: Service metadata: name: svc-with-externalip spec: externalIPs: policy: allowedCIDRs: - 192.168.123.0/28",
"oc patch svc <name> -p '{ \"spec\": { \"externalIPs\": [ \"<ip_address>\" ] } }'",
"oc patch svc mysql-55-rhel7 -p '{\"spec\":{\"externalIPs\":[\"192.174.120.10\"]}}'",
"\"mysql-55-rhel7\" patched",
"oc get svc",
"NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE mysql-55-rhel7 172.30.131.89 192.174.120.10 3306/TCP 13m",
"oc adm policy add-cluster-role-to-user cluster-admin <user_name>",
"oc new-project <project_name>",
"oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git",
"oc get svc -n <project_name>",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s",
"oc project <project_name>",
"oc edit svc <service_name>",
"spec: ports: - name: 8443-tcp nodePort: 30327 1 port: 8443 protocol: TCP targetPort: 8443 sessionAffinity: None type: NodePort 2",
"oc get svc -n myproject",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.217.127 <none> 3306/TCP 9m44s nodejs-ex-ingress NodePort 172.30.107.72 <none> 3306:31345/TCP 39s",
"oc delete svc nodejs-ex",
"oc get svc",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE httpd NodePort 172.xx.xx.xx <none> 8443:30327/TCP 109s",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\": {\"type\":\"LoadBalancerService\", \"loadbalancer\": {\"scope\":\"External\", \"allowedSourceRanges\":[\"0.0.0.0/0\"]}}}}' 1",
"oc get svc router-default -n openshift-ingress -o yaml",
"apiVersion: v1 kind: Service metadata: annotations: service.beta.kubernetes.io/load-balancer-source-ranges: 192.168.0.1/32",
"oc get svc router-default -n openshift-ingress -o yaml",
"spec: loadBalancerSourceRanges: - 0.0.0.0/0",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\": {\"loadBalancer\":{\"allowedSourceRanges\":[\"0.0.0.0/0\"]}}}}' 1",
"oc get ingressclass",
"oc get ingress -A",
"oc patch ingress/<ingress_name> --type=merge --patch '{\"spec\":{\"ingressClassName\":\"openshift-default\"}}'",
"oc get nns",
"oc get nns node01 -o yaml",
"apiVersion: nmstate.io/v1 kind: NodeNetworkState metadata: name: node01 1 status: currentState: 2 dns-resolver: interfaces: route-rules: routes: lastSuccessfulUpdateTime: \"2020-01-31T12:14:00Z\" 3",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: worker-0-ipoib spec: interfaces: - description: \"\" infiniband: mode: datagram 1 pkey: \"0xffff\" 2 ipv4: address: - ip: 100.125.3.4 prefix-length: 16 dhcp: false enabled: true ipv6: enabled: false name: ibp27s0 state: up type: infiniband 3",
"oc apply -f <nncp_file_name> 1",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" 3 maxUnavailable: 3 4 desiredState: interfaces: - name: br1 description: Linux bridge with eth1 as a port 5 type: linux-bridge state: up ipv4: dhcp: true enabled: true auto-dns: false bridge: options: stp: enabled: false port: - name: eth1 dns-resolver: 6 config: search: - example.com - example.org server: - 8.8.8.8",
"oc apply -f br1-eth1-policy.yaml 1",
"oc get nncp",
"oc get nncp <policy> -o yaml",
"oc get nnce",
"oc get nnce <node>.<policy> -o yaml",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: <br1-eth1-policy> 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" 3 desiredState: interfaces: - name: br1 type: linux-bridge state: absent 4 - name: eth1 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9",
"oc apply -f <br1-eth1-policy.yaml> 1",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: br1 4 description: Linux bridge with eth1 as a port 5 type: linux-bridge 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 bridge: options: stp: enabled: false 10 port: - name: eth1 11",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: vlan-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1.102 4 description: VLAN using eth1 5 type: vlan 6 state: up 7 vlan: base-iface: eth1 8 id: 102 9",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: qos 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" 3 desiredState: interfaces: - name: ens1f0 4 description: Change QOS on VF0 5 type: ethernet 6 state: up 7 ethernet: sr-iov: total-vfs: 3 8 vfs: - id: 0 9 max-tx-rate: 200 10",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: addvf 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" 3 maxUnavailable: 3 desiredState: interfaces: - name: ens1f1 4 type: ethernet state: up ethernet: sr-iov: total-vfs: 1 5 vfs: - id: 0 trust: true 6 vlan-id: 477 7 - name: bond0 8 description: Attach VFs to bond 9 type: bond 10 state: up 11 link-aggregation: mode: active-backup 12 options: primary: ens1f0v0 13 port: 14 - ens1f0v0 - ens1f1v0 15",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: bond0-eth1-eth2-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: bond0 4 description: Bond with ports eth1 and eth2 5 type: bond 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 link-aggregation: mode: active-backup 10 options: miimon: '140' 11 port: 12 - eth1 - eth2 mtu: 1450 13",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1 4 description: Configuring eth1 on node01 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: bond-vlan 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: bond10 4 description: Bonding eth2 and eth3 5 type: bond 6 state: up 7 link-aggregation: mode: balance-xor 8 options: miimon: '140' 9 port: 10 - eth2 - eth3 - name: bond10.103 11 description: vlan using bond10 12 type: vlan 13 state: up 14 vlan: base-iface: bond10 15 id: 103 16 ipv4: dhcp: true 17 enabled: true 18",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: vrfpolicy 1 spec: nodeSelector: vrf: \"true\" 2 maxUnavailable: 3 desiredState: interfaces: - name: ens4vrf 3 type: vrf 4 state: up vrf: port: - ens4 5 route-table-id: 2 6",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-copy-ipv4-policy 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" capture: eth1-nic: interfaces.name==\"eth1\" 3 eth1-routes: routes.running.next-hop-interface==\"eth1\" br1-routes: capture.eth1-routes | routes.running.next-hop-interface := \"br1\" desiredState: interfaces: - name: br1 description: Linux bridge with eth1 as a port type: linux-bridge 4 state: up ipv4: \"{{ capture.eth1-nic.interfaces.0.ipv4 }}\" 5 bridge: options: stp: enabled: false port: - name: eth1 6 routes: config: \"{{ capture.br1-routes.routes.running }}\"",
"interfaces: - name: eth1 description: static IP on eth1 type: ethernet state: up ipv4: dhcp: false address: - ip: 192.168.122.250 1 prefix-length: 24 enabled: true",
"interfaces: - name: eth1 description: No IP on eth1 type: ethernet state: up ipv4: enabled: false",
"interfaces: - name: eth1 description: DHCP on eth1 type: ethernet state: up ipv4: dhcp: true enabled: true",
"interfaces: - name: eth1 description: DHCP without gateway or DNS on eth1 type: ethernet state: up ipv4: dhcp: true auto-gateway: false auto-dns: false enabled: true",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: worker-0-dns-testing spec: nodeSelector: kubernetes.io/hostname: <target_node> desiredState: dns-resolver: config: search: - example.com - example.org server: - 2001:db8:f::1 - 192.0.2.251",
"desiredState: dns-resolver: config: search: options: - timeout:2 - attempts:3",
"dns-resolver: config: {} interfaces: []",
"dns-resolver: config: server: - 192.0.2.1 interfaces: - name: eth1 type: ethernet state: up ipv4: enabled: true dhcp: true auto-dns: true",
"dns-resolver: config: search: - example.com - example.org server: - 2001:db8:1::d1 - 2001:db8:1::d2 - 192.0.2.1 interfaces: - name: eth1 type: ethernet state: up ipv4: address: - ip: 192.0.2.251 prefix-length: 24 dhcp: false enabled: true ipv6: address: - ip: 2001:db8:1::1 prefix-length: 64 dhcp: false enabled: true autoconf: false",
"dns-resolver: config: search: - example.com - example.org server: [] interfaces: - name: eth1 type: ethernet state: up ipv4: enabled: true dhcp: true auto-dns: true ipv6: enabled: true dhcp: true autoconf: true auto-dns: true",
"dns-resolver: config: interfaces: - name: eth1 description: Static routing on eth1 type: ethernet state: up ipv4: dhcp: false enabled: true address: - ip: 192.0.2.251 1 prefix-length: 24 routes: config: - destination: 198.51.100.0/24 metric: 150 next-hop-address: 192.0.2.1 2 next-hop-interface: eth1 table-id: 254",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ens01-bridge-testfail spec: desiredState: interfaces: - name: br1 description: Linux bridge with the wrong port type: linux-bridge state: up ipv4: dhcp: true enabled: true bridge: options: stp: enabled: false port: - name: ens01",
"oc apply -f ens01-bridge-testfail.yaml",
"nodenetworkconfigurationpolicy.nmstate.io/ens01-bridge-testfail created",
"oc get nncp",
"NAME STATUS ens01-bridge-testfail FailedToConfigure",
"oc get nnce",
"NAME STATUS control-plane-1.ens01-bridge-testfail FailedToConfigure control-plane-2.ens01-bridge-testfail FailedToConfigure control-plane-3.ens01-bridge-testfail FailedToConfigure compute-1.ens01-bridge-testfail FailedToConfigure compute-2.ens01-bridge-testfail FailedToConfigure compute-3.ens01-bridge-testfail FailedToConfigure",
"oc get nnce compute-1.ens01-bridge-testfail -o jsonpath='{.status.conditions[?(@.type==\"Failing\")].message}'",
"[2024-10-10T08:40:46Z INFO nmstatectl] Nmstate version: 2.2.37 NmstateError: InvalidArgument: Controller interface br1 is holding unknown port ens01",
"oc get nns control-plane-1 -o yaml",
"- ipv4: name: ens1 state: up type: ethernet",
"oc edit nncp ens01-bridge-testfail",
"port: - name: ens1",
"oc get nncp",
"NAME STATUS ens01-bridge-testfail SuccessfullyConfigured",
"cat >> /etc/named.conf <<EOF zone \"root-servers.net\" IN { type master; file \"named.localhost\"; }; EOF",
"systemctl restart named",
"journalctl -u named|grep root-servers.net",
"Jul 03 15:16:26 rhel-8-10 bash[xxxx]: zone root-servers.net/IN: loaded serial 0 Jul 03 15:16:26 rhel-8-10 named[xxxx]: zone root-servers.net/IN: loaded serial 0",
"host -t NS root-servers.net. 127.0.0.1",
"Using domain server: Name: 127.0.0.1 Address: 127.0.0.53 Aliases: root-servers.net name server root-servers.net.",
"echo 'server=/root-servers.net/<DNS_server_IP>'> /etc/dnsmasq.d/delegate-root-servers.net.conf",
"systemctl restart dnsmasq",
"journalctl -u dnsmasq|grep root-servers.net",
"Jul 03 15:31:25 rhel-8-10 dnsmasq[1342]: using nameserver 192.168.1.1#53 for domain root-servers.net",
"host -t NS root-servers.net. 127.0.0.1",
"Using domain server: Name: 127.0.0.1 Address: 127.0.0.1#53 Aliases: root-servers.net name server root-servers.net.",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster networking: clusterNetwork: 1 - cidr: <ip_address_from_cidr> hostPrefix: 23 network type: OVNKubernetes machineNetwork: 2 - cidr: <ip_address_from_cidr> serviceNetwork: 3 - 172.30.0.0/16 status: noProxy: - localhost - .cluster.local - .svc - 127.0.0.1 - <api_server_internal_url> 4",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: \"\" status:",
"apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4",
"oc create -f user-ca-bundle.yaml",
"oc edit proxy/cluster",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5",
"oc edit proxy/cluster",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: {}",
"oc get proxy/cluster -o yaml",
"oc get proxy/cluster -o jsonpath='{.status}'",
"{ status: httpProxy: http://user:xxx@xxxx:3128 httpsProxy: http://user:xxx@xxxx:3128 noProxy: .cluster.local,.svc,10.0.0.0/16,10.128.0.0/14,127.0.0.1,169.254.169.254,172.30.0.0/16,localhost,test.no-proxy.com }",
"oc logs -n openshift-machine-config-operator USD(oc get pods -n openshift-machine-config-operator -l k8s-app=machine-config-operator -o name)",
"oc logs -n openshift-cluster-version USD(oc get pods -n openshift-cluster-version -l k8s-app=machine-config-operator -o name)",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: \"\" status:",
"apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4",
"oc create -f user-ca-bundle.yaml",
"oc edit proxy/cluster",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5",
"config.openshift.io/inject-trusted-cabundle=\"true\"",
"apiVersion: v1 data: {} kind: ConfigMap metadata: labels: config.openshift.io/inject-trusted-cabundle: \"true\" name: ca-inject 1 namespace: apache",
"apiVersion: apps/v1 kind: Deployment metadata: name: my-example-custom-ca-deployment namespace: my-example-custom-ca-ns spec: spec: containers: - name: my-container-that-needs-custom-ca volumeMounts: - name: trusted-ca mountPath: /etc/pki/ca-trust/extracted/pem readOnly: true volumes: - name: trusted-ca configMap: name: ca-inject items: - key: ca-bundle.crt 1 path: tls-ca-bundle.pem 2",
"openstack loadbalancer create --name API_OCP_CLUSTER --vip-subnet-id <id_of_worker_vms_subnet>",
"openstack loadbalancer listener create --name API_OCP_CLUSTER_6443 --protocol HTTPS--protocol-port 6443 API_OCP_CLUSTER",
"openstack loadbalancer pool create --name API_OCP_CLUSTER_pool_6443 --lb-algorithm ROUND_ROBIN --session-persistence type=<source_IP_address> --listener API_OCP_CLUSTER_6443 --protocol HTTPS",
"openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type TCP API_OCP_CLUSTER_pool_6443",
"for SERVER in USD(MASTER-0-IP MASTER-1-IP MASTER-2-IP) do openstack loadbalancer member create --address USDSERVER --protocol-port 6443 API_OCP_CLUSTER_pool_6443 done",
"openstack floating ip unset USDAPI_FIP",
"openstack floating ip set --port USD(openstack loadbalancer show -c <vip_port_id> -f value API_OCP_CLUSTER) USDAPI_FIP",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"listen api-server-6443 bind *:6443 mode tcp server master-00 192.168.83.89:6443 check inter 1s server master-01 192.168.84.90:6443 check inter 1s server master-02 192.168.85.99:6443 check inter 1s server bootstrap 192.168.80.89:6443 check inter 1s listen machine-config-server-22623 bind *:22623 mode tcp server master-00 192.168.83.89:22623 check inter 1s server master-01 192.168.84.90:22623 check inter 1s server master-02 192.168.85.99:22623 check inter 1s server bootstrap 192.168.80.89:22623 check inter 1s listen ingress-router-80 bind *:80 mode tcp balance source server worker-00 192.168.83.100:80 check inter 1s server worker-01 192.168.83.101:80 check inter 1s listen ingress-router-443 bind *:443 mode tcp balance source server worker-00 192.168.83.100:443 check inter 1s server worker-01 192.168.83.101:443 check inter 1s listen ironic-api-6385 bind *:6385 mode tcp balance source server master-00 192.168.83.89:6385 check inter 1s server master-01 192.168.84.90:6385 check inter 1s server master-02 192.168.85.99:6385 check inter 1s server bootstrap 192.168.80.89:6385 check inter 1s listen inspector-api-5050 bind *:5050 mode tcp balance source server master-00 192.168.83.89:5050 check inter 1s server master-01 192.168.84.90:5050 check inter 1s server master-02 192.168.85.99:5050 check inter 1s server bootstrap 192.168.80.89:5050 check inter 1s",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"platform: openstack: loadBalancer: type: UserManaged 1 apiVIPs: - <api_ip> 2 ingressVIPs: - <ingress_ip> 3",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: <name> 1 spec: domain: <domain> 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: External 3 providerParameters: type: OpenStack openstack: floatingIP: <ingress_port_IP> 4",
"oc apply -f sample-ingress.yaml",
"*.apps.<name>.<domain>. IN A <ingress_port_IP>",
"oc get ingresscontroller -n openshift-ingress-operator <name> -o jsonpath=\"{.status.conditions}\" | yq -PC",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example labels: 1 zone: east spec: addresses: - 203.0.113.1-203.0.113.10 - 203.0.113.65-203.0.113.75",
"oc apply -f ipaddresspool.yaml",
"oc describe -n metallb-system IPAddressPool doc-example",
"Name: doc-example Namespace: metallb-system Labels: zone=east Annotations: <none> API Version: metallb.io/v1beta1 Kind: IPAddressPool Metadata: Spec: Addresses: 203.0.113.1-203.0.113.10 203.0.113.65-203.0.113.75 Auto Assign: true Events: <none>",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-vlan labels: zone: east 1 spec: addresses: - 192.168.100.1-192.168.100.254 2",
"oc apply -f ipaddresspool-vlan.yaml",
"oc edit network.config.openshift/cluster",
"spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: gatewayConfig: ipForwarding: Global",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-cidr namespace: metallb-system spec: addresses: - 192.168.100.0/24 - 192.168.200.0/24 - 192.168.255.1-192.168.255.5",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-reserved namespace: metallb-system spec: addresses: - 10.0.100.0/28 autoAssign: false",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-combined namespace: metallb-system spec: addresses: - 10.0.100.0/28 - 2002:2:2::1-2002:2:2::100",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-service-allocation namespace: metallb-system spec: addresses: - 192.168.20.0/24 serviceAllocation: priority: 50 1 namespaces: 2 - namespace-a - namespace-b namespaceSelectors: 3 - matchLabels: zone: east serviceSelectors: 4 - matchExpressions: - key: security operator: In values: - S1",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-basic spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124",
"oc apply -f ipaddresspool.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-basic namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-basic",
"oc apply -f bgpadvertisement.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-adv labels: zone: east spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124 autoAssign: false",
"oc apply -f ipaddresspool.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-adv-1 namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-adv communities: - 65535:65282 aggregationLength: 32 localPref: 100",
"oc apply -f bgpadvertisement1.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-adv-2 namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-adv communities: - 8000:800 aggregationLength: 30 aggregationLengthV6: 124",
"oc apply -f bgpadvertisement2.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool1 spec: addresses: - 4.4.4.100-4.4.4.200 - 2001:100:4::200-2001:100:4::400",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: example spec: ipAddressPools: - pool1 nodeSelector: - matchLabels: kubernetes.io/hostname: NodeA - matchLabels: kubernetes.io/hostname: NodeB",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2 spec: addresses: - 4.4.4.0/24 autoAssign: false",
"oc apply -f ipaddresspool.yaml",
"apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - doc-example-l2",
"oc apply -f l2advertisement.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2-label labels: zone: east spec: addresses: - 172.31.249.87/32",
"oc apply -f ipaddresspool.yaml",
"apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement-label namespace: metallb-system spec: ipAddressPoolSelectors: - matchExpressions: - key: zone operator: In values: - east",
"oc apply -f l2advertisement.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2 spec: addresses: - 4.4.4.0/24 autoAssign: false",
"oc apply -f ipaddresspool.yaml",
"apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - doc-example-l2 interfaces: - interfaceA - interfaceB",
"oc apply -f l2advertisement.yaml",
"oc patch network.operator cluster -p '{\"spec\":{\"defaultNetwork\":{\"ovnKubernetesConfig\":{\"gatewayConfig\": {\"routingViaHost\": true} }}}}' --type=merge",
"echo -e \"net.ipv4.conf.bridge-net.forwarding = 1\\nnet.ipv6.conf.bridge-net.forwarding = 1\\nnet.ipv4.conf.bridge-net.rp_filter = 0\\nnet.ipv6.conf.bridge-net.rp_filter = 0\" | base64 -w0",
"bmV0LmlwdjQuY29uZi5icmlkZ2UtbmV0LmZvcndhcmRpbmcgPSAxCm5ldC5pcHY2LmNvbmYuYnJpZGdlLW5ldC5mb3J3YXJkaW5nID0gMQpuZXQuaXB2NC5jb25mLmJyaWRnZS1uZXQucnBfZmlsdGVyID0gMApuZXQuaXB2Ni5jb25mLmJyaWRnZS1uZXQucnBfZmlsdGVyID0gMAo=",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: <node_role> 1 name: 81-enable-global-forwarding spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,bmV0LmlwdjQuY29uZi5icmlkZ2UtbmV0LmZvcndhcmRpbmcgPSAxCm5ldC5pcHY2LmNvbmYuYnJpZGdlLW5ldC5mb3J3YXJkaW5nID0gMQpuZXQuaXB2NC5jb25mLmJyaWRnZS1uZXQucnBfZmlsdGVyID0gMApuZXQuaXB2Ni5jb25mLmJyaWRnZS1uZXQucnBfZmlsdGVyID0gMAo= 2 verification: {} filesystem: root mode: 644 path: /etc/sysctl.d/enable-global-forwarding.conf osImageURL: \"\"",
"oc apply -f enable-ip-forward.yaml",
"oc debug node/<node-name>",
"chroot /host",
"cat /etc/sysctl.d/enable-global-forwarding.conf",
"net.ipv4.conf.bridge-net.forwarding = 1 net.ipv6.conf.bridge-net.forwarding = 1 net.ipv4.conf.bridge-net.rp_filter = 0 net.ipv6.conf.bridge-net.rp_filter = 0",
"oc patch network.operator cluster -p '{\"spec\":{\"defaultNetwork\":{\"ovnKubernetesConfig\":{\"gatewayConfig\":{\"ipForwarding\": \"Global\"}}}}}",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: doc-example-peer spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10",
"oc apply -f bgppeer.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool1 spec: addresses: - 4.4.4.100-4.4.4.200 - 2001:100:4::200-2001:100:4::400",
"oc apply -f ipaddresspool1.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool2 spec: addresses: - 5.5.5.100-5.5.5.200 - 2001:100:5::200-2001:100:5::400",
"oc apply -f ipaddresspool2.yaml",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: peer1 spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10",
"oc apply -f bgppeer1.yaml",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: peer2 spec: peerAddress: 10.0.0.2 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10",
"oc apply -f bgppeer2.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-1 namespace: metallb-system spec: ipAddressPools: - pool1 peers: - peer1 communities: - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100",
"oc apply -f bgpadvertisement1.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-2 namespace: metallb-system spec: ipAddressPools: - pool2 peers: - peer2 communities: - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100",
"oc apply -f bgpadvertisement2.yaml",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: frrviavrf namespace: metallb-system spec: myASN: 100 peerASN: 200 peerAddress: 192.168.130.1 vrf: ens4vrf 1",
"oc apply -f frrviavrf.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: first-pool namespace: metallb-system spec: addresses: - 192.169.10.0/32",
"oc apply -f first-pool.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: first-adv namespace: metallb-system spec: ipAddressPools: - first-pool peers: - frrviavrf 1",
"oc apply -f first-adv.yaml",
"apiVersion: v1 kind: Namespace metadata: name: test --- apiVersion: apps/v1 kind: Deployment metadata: name: server namespace: test spec: selector: matchLabels: app: server template: metadata: labels: app: server spec: containers: - name: server image: registry.redhat.io/ubi9/ubi ports: - name: http containerPort: 30100 command: [\"/bin/sh\", \"-c\"] args: [\"sleep INF\"] --- apiVersion: v1 kind: Service metadata: name: server1 namespace: test spec: ports: - name: http port: 30100 protocol: TCP targetPort: 30100 selector: app: server type: LoadBalancer",
"oc apply -f deploy-service.yaml",
"oc get -n metallb-system pods -l component=speaker",
"NAME READY STATUS RESTARTS AGE speaker-c6c5f 6/6 Running 0 69m",
"oc exec -n metallb-system <speaker_pod> -c frr -- vtysh -c \"show bgp vrf <vrf_name> neigh\"",
"BGP neighbor is 192.168.30.1, remote AS 200, local AS 100, external link BGP version 4, remote router ID 192.168.30.1, local router ID 192.168.30.71 BGP state = Established, up for 04:20:09",
"oc exec -n metallb-system <speaker_pod> -c frr -- vtysh -c \"show bgp vrf <vrf_name> ipv4\"",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-nodesel namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64501 myASN: 64500 nodeSelectors: - matchExpressions: - key: kubernetes.io/hostname operator: In values: [compute-1.example.com, compute-2.example.com]",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-peer-bfd namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64501 myASN: 64500 holdTime: \"10s\" bfdProfile: doc-example-bfd-profile-full",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-dual-stack-ipv4 namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64500 myASN: 64500 --- apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-dual-stack-ipv6 namespace: metallb-system spec: peerAddress: 2620:52:0:88::104 peerASN: 64500 myASN: 64500",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-community spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124",
"oc apply -f ipaddresspool.yaml",
"apiVersion: metallb.io/v1beta1 kind: Community metadata: name: community1 namespace: metallb-system spec: communities: - name: NO_ADVERTISE value: '65535:65282'",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: doc-example-bgp-peer spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10",
"oc apply -f bgppeer.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgp-community-sample namespace: metallb-system spec: aggregationLength: 32 aggregationLengthV6: 128 communities: - NO_ADVERTISE 1 ipAddressPools: - doc-example-bgp-community peers: - doc-example-peer",
"oc apply -f bgpadvertisement.yaml",
"apiVersion: metallb.io/v1beta1 kind: BFDProfile metadata: name: doc-example-bfd-profile-full namespace: metallb-system spec: receiveInterval: 300 transmitInterval: 300 detectMultiplier: 3 echoMode: false passiveMode: true minimumTtl: 254",
"oc apply -f bfdprofile.yaml",
"apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.io/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer loadBalancerIP: <ip_address>",
"Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning AllocationFailed 3m16s metallb-controller Failed to allocate IP for \"default/invalid-request\": \"4.3.2.1\" is not allowed in config",
"apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.io/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer",
"apiVersion: v1 kind: Service metadata: name: <service_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer",
"apiVersion: v1 kind: Service metadata: name: service-http annotations: metallb.io/address-pool: doc-example metallb.io/allow-shared-ip: \"web-server-svc\" 1 spec: ports: - name: http port: 80 2 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> 3 type: LoadBalancer loadBalancerIP: 172.31.249.7 4 --- apiVersion: v1 kind: Service metadata: name: service-https annotations: metallb.io/address-pool: doc-example metallb.io/allow-shared-ip: \"web-server-svc\" spec: ports: - name: https port: 443 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> type: LoadBalancer loadBalancerIP: 172.31.249.7",
"oc apply -f <service_name>.yaml",
"service/<service_name> created",
"oc describe service <service_name>",
"Name: <service_name> Namespace: default Labels: <none> Annotations: metallb.io/address-pool: doc-example 1 Selector: app=service_name Type: LoadBalancer 2 IP Family Policy: SingleStack IP Families: IPv4 IP: 10.105.237.254 IPs: 10.105.237.254 LoadBalancer Ingress: 192.168.100.5 3 Port: <unset> 80/TCP TargetPort: 8080/TCP NodePort: <unset> 30550/TCP Endpoints: 10.244.0.50:8080 Session Affinity: None External Traffic Policy: Cluster Events: 4 Type Reason Age From Message ---- ------ ---- ---- ------- Normal nodeAssigned 32m (x2 over 32m) metallb-speaker announcing from node \"<node_name>\"",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: vrfpolicy 1 spec: nodeSelector: vrf: \"true\" 2 maxUnavailable: 3 desiredState: interfaces: - name: ens4vrf 3 type: vrf 4 state: up vrf: port: - ens4 5 route-table-id: 2 6 - name: ens4 7 type: ethernet state: up ipv4: address: - ip: 192.168.130.130 prefix-length: 24 dhcp: false enabled: true routes: 8 config: - destination: 0.0.0.0/0 metric: 150 next-hop-address: 192.168.130.1 next-hop-interface: ens4 table-id: 2 route-rules: 9 config: - ip-to: 172.30.0.0/16 priority: 998 route-table: 254 10 - ip-to: 10.132.0.0/14 priority: 998 route-table: 254 - ip-to: 169.254.0.0/17 priority: 998 route-table: 254",
"oc apply -f node-network-vrf.yaml",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: frrviavrf namespace: metallb-system spec: myASN: 100 peerASN: 200 peerAddress: 192.168.130.1 vrf: ens4vrf 1",
"oc apply -f frr-via-vrf.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: first-pool namespace: metallb-system spec: addresses: - 192.169.10.0/32",
"oc apply -f first-pool.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: first-adv namespace: metallb-system spec: ipAddressPools: - first-pool peers: - frrviavrf 1 nodeSelectors: - matchLabels: egress-service.k8s.ovn.org/test-server1: \"\" 2",
"oc apply -f first-adv.yaml",
"apiVersion: k8s.ovn.org/v1 kind: EgressService metadata: name: server1 1 namespace: test 2 spec: sourceIPBy: \"LoadBalancerIP\" 3 nodeSelector: matchLabels: vrf: \"true\" 4 network: \"2\" 5",
"oc apply -f egress-service.yaml",
"curl <external_ip_address>:<port_number> 1",
"apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: metallb-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.18.0.5 asn: 64512 toReceive: allowed: mode: all",
"apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: bgpBackend: frr-k8s frrk8sConfig: alwaysBlock: - 192.168.1.0/24",
"oc describe network.config/cluster",
"apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.30.0.3 asn: 4200000000 ebgpMultiHop: true port: 180 - address: 172.18.0.6 asn: 4200000000 port: 179",
"apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.30.0.3 asn: 4200000000 ebgpMultiHop: true port: 180 toAdvertise: allowed: prefixes: 1 - 192.168.2.0/24 prefixes: - 192.168.2.0/24 - 192.169.2.0/24",
"apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.30.0.3 asn: 4200000000 ebgpMultiHop: true port: 180 toAdvertise: allowed: mode: all 1 prefixes: - 192.168.2.0/24 - 192.169.2.0/24",
"apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.18.0.5 asn: 64512 port: 179 toReceive: allowed: prefixes: - prefix: 192.168.1.0/24 - prefix: 192.169.2.0/24 ge: 25 1 le: 28 2",
"apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.18.0.5 asn: 64512 port: 179 toReceive: allowed: mode: all",
"apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.30.0.3 asn: 64512 port: 180 bfdProfile: defaultprofile bfdProfiles: - name: defaultprofile",
"apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 nodeSelector: labelSelector: foo: \"bar\"",
"apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: logLevel: debug nodeSelector: node-role.kubernetes.io/worker: \"\"",
"oc replace -f setdebugloglevel.yaml",
"oc get -n metallb-system pods -l component=speaker",
"NAME READY STATUS RESTARTS AGE speaker-2m9pm 4/4 Running 0 9m19s speaker-7m4qw 3/4 Running 0 19s speaker-szlmx 4/4 Running 0 9m19s",
"oc logs -n metallb-system speaker-7m4qw -c speaker",
"{\"branch\":\"main\",\"caller\":\"main.go:92\",\"commit\":\"3d052535\",\"goversion\":\"gc / go1.17.1 / amd64\",\"level\":\"info\",\"msg\":\"MetalLB speaker starting (commit 3d052535, branch main)\",\"ts\":\"2022-05-17T09:55:05Z\",\"version\":\"\"} {\"caller\":\"announcer.go:110\",\"event\":\"createARPResponder\",\"interface\":\"ens4\",\"level\":\"info\",\"msg\":\"created ARP responder for interface\",\"ts\":\"2022-05-17T09:55:05Z\"} {\"caller\":\"announcer.go:119\",\"event\":\"createNDPResponder\",\"interface\":\"ens4\",\"level\":\"info\",\"msg\":\"created NDP responder for interface\",\"ts\":\"2022-05-17T09:55:05Z\"} {\"caller\":\"announcer.go:110\",\"event\":\"createARPResponder\",\"interface\":\"tun0\",\"level\":\"info\",\"msg\":\"created ARP responder for interface\",\"ts\":\"2022-05-17T09:55:05Z\"} {\"caller\":\"announcer.go:119\",\"event\":\"createNDPResponder\",\"interface\":\"tun0\",\"level\":\"info\",\"msg\":\"created NDP responder for interface\",\"ts\":\"2022-05-17T09:55:05Z\"} I0517 09:55:06.515686 95 request.go:665] Waited for 1.026500832s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/operators.coreos.com/v1alpha1?timeout=32s {\"Starting Manager\":\"(MISSING)\",\"caller\":\"k8s.go:389\",\"level\":\"info\",\"ts\":\"2022-05-17T09:55:08Z\"} {\"caller\":\"speakerlist.go:310\",\"level\":\"info\",\"msg\":\"node event - forcing sync\",\"node addr\":\"10.0.128.4\",\"node event\":\"NodeJoin\",\"node name\":\"ci-ln-qb8t3mb-72292-7s7rh-worker-a-vvznj\",\"ts\":\"2022-05-17T09:55:08Z\"} {\"caller\":\"service_controller.go:113\",\"controller\":\"ServiceReconciler\",\"enqueueing\":\"openshift-kube-controller-manager-operator/metrics\",\"epslice\":\"{\\\"metadata\\\":{\\\"name\\\":\\\"metrics-xtsxr\\\",\\\"generateName\\\":\\\"metrics-\\\",\\\"namespace\\\":\\\"openshift-kube-controller-manager-operator\\\",\\\"uid\\\":\\\"ac6766d7-8504-492c-9d1e-4ae8897990ad\\\",\\\"resourceVersion\\\":\\\"9041\\\",\\\"generation\\\":4,\\\"creationTimestamp\\\":\\\"2022-05-17T07:16:53Z\\\",\\\"labels\\\":{\\\"app\\\":\\\"kube-controller-manager-operator\\\",\\\"endpointslice.kubernetes.io/managed-by\\\":\\\"endpointslice-controller.k8s.io\\\",\\\"kubernetes.io/service-name\\\":\\\"metrics\\\"},\\\"annotations\\\":{\\\"endpoints.kubernetes.io/last-change-trigger-time\\\":\\\"2022-05-17T07:21:34Z\\\"},\\\"ownerReferences\\\":[{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Service\\\",\\\"name\\\":\\\"metrics\\\",\\\"uid\\\":\\\"0518eed3-6152-42be-b566-0bd00a60faf8\\\",\\\"controller\\\":true,\\\"blockOwnerDeletion\\\":true}],\\\"managedFields\\\":[{\\\"manager\\\":\\\"kube-controller-manager\\\",\\\"operation\\\":\\\"Update\\\",\\\"apiVersion\\\":\\\"discovery.k8s.io/v1\\\",\\\"time\\\":\\\"2022-05-17T07:20:02Z\\\",\\\"fieldsType\\\":\\\"FieldsV1\\\",\\\"fieldsV1\\\":{\\\"f:addressType\\\":{},\\\"f:endpoints\\\":{},\\\"f:metadata\\\":{\\\"f:annotations\\\":{\\\".\\\":{},\\\"f:endpoints.kubernetes.io/last-change-trigger-time\\\":{}},\\\"f:generateName\\\":{},\\\"f:labels\\\":{\\\".\\\":{},\\\"f:app\\\":{},\\\"f:endpointslice.kubernetes.io/managed-by\\\":{},\\\"f:kubernetes.io/service-name\\\":{}},\\\"f:ownerReferences\\\":{\\\".\\\":{},\\\"k:{\\\\\\\"uid\\\\\\\":\\\\\\\"0518eed3-6152-42be-b566-0bd00a60faf8\\\\\\\"}\\\":{}}},\\\"f:ports\\\":{}}}]},\\\"addressType\\\":\\\"IPv4\\\",\\\"endpoints\\\":[{\\\"addresses\\\":[\\\"10.129.0.7\\\"],\\\"conditions\\\":{\\\"ready\\\":true,\\\"serving\\\":true,\\\"terminating\\\":false},\\\"targetRef\\\":{\\\"kind\\\":\\\"
Pod\\\",\\\"namespace\\\":\\\"openshift-kube-controller-manager-operator\\\",\\\"name\\\":\\\"kube-controller-manager-operator-6b98b89ddd-8d4nf\\\",\\\"uid\\\":\\\"dd5139b8-e41c-4946-a31b-1a629314e844\\\",\\\"resourceVersion\\\":\\\"9038\\\"},\\\"nodeName\\\":\\\"ci-ln-qb8t3mb-72292-7s7rh-master-0\\\",\\\"zone\\\":\\\"us-central1-a\\\"}],\\\"ports\\\":[{\\\"name\\\":\\\"https\\\",\\\"protocol\\\":\\\"TCP\\\",\\\"port\\\":8443}]}\",\"level\":\"debug\",\"ts\":\"2022-05-17T09:55:08Z\"}",
"oc logs -n metallb-system speaker-7m4qw -c frr",
"Started watchfrr 2022/05/17 09:55:05 ZEBRA: client 16 says hello and bids fair to announce only bgp routes vrf=0 2022/05/17 09:55:05 ZEBRA: client 31 says hello and bids fair to announce only vnc routes vrf=0 2022/05/17 09:55:05 ZEBRA: client 38 says hello and bids fair to announce only static routes vrf=0 2022/05/17 09:55:05 ZEBRA: client 43 says hello and bids fair to announce only bfd routes vrf=0 2022/05/17 09:57:25.089 BGP: Creating Default VRF, AS 64500 2022/05/17 09:57:25.090 BGP: dup addr detect enable max_moves 5 time 180 freeze disable freeze_time 0 2022/05/17 09:57:25.090 BGP: bgp_get: Registering BGP instance (null) to zebra 2022/05/17 09:57:25.090 BGP: Registering VRF 0 2022/05/17 09:57:25.091 BGP: Rx Router Id update VRF 0 Id 10.131.0.1/32 2022/05/17 09:57:25.091 BGP: RID change : vrf VRF default(0), RTR ID 10.131.0.1 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF br0 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF ens4 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF ens4 addr 10.0.128.4/32 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF ens4 addr fe80::c9d:84da:4d86:5618/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF lo 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF ovs-system 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF tun0 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF tun0 addr 10.131.0.1/23 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF tun0 addr fe80::40f1:d1ff:feb6:5322/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF veth2da49fed 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF veth2da49fed addr fe80::24bd:d1ff:fec1:d88/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF veth2fa08c8c 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF veth2fa08c8c addr fe80::6870:ff:fe96:efc8/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF veth41e356b7 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF veth41e356b7 addr fe80::48ff:37ff:fede:eb4b/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF veth1295c6e2 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF veth1295c6e2 addr fe80::b827:a2ff:feed:637/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF veth9733c6dc 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF veth9733c6dc addr fe80::3cf4:15ff:fe11:e541/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF veth336680ea 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF veth336680ea addr fe80::94b1:8bff:fe7e:488c/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vetha0a907b7 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vetha0a907b7 addr fe80::3855:a6ff:fe73:46c3/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vethf35a4398 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vethf35a4398 addr fe80::40ef:2fff:fe57:4c4d/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vethf831b7f4 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vethf831b7f4 addr fe80::f0d9:89ff:fe7c:1d32/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vxlan_sys_4789 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vxlan_sys_4789 addr fe80::80c1:82ff:fe4b:f078/64 2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] Timer (start timer expire). 
2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] BGP_Start (Idle->Connect), fd -1 2022/05/17 09:57:26.094 BGP: Allocated bnc 10.0.0.1/32(0)(VRF default) peer 0x7f807f7631a0 2022/05/17 09:57:26.094 BGP: sendmsg_zebra_rnh: sending cmd ZEBRA_NEXTHOP_REGISTER for 10.0.0.1/32 (vrf VRF default) 2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] Waiting for NHT 2022/05/17 09:57:26.094 BGP: bgp_fsm_change_status : vrf default(0), Status: Connect established_peers 0 2022/05/17 09:57:26.094 BGP: 10.0.0.1 went from Idle to Connect 2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] TCP_connection_open_failed (Connect->Active), fd -1 2022/05/17 09:57:26.094 BGP: bgp_fsm_change_status : vrf default(0), Status: Active established_peers 0 2022/05/17 09:57:26.094 BGP: 10.0.0.1 went from Connect to Active 2022/05/17 09:57:26.094 ZEBRA: rnh_register msg from client bgp: hdr->length=8, type=nexthop vrf=0 2022/05/17 09:57:26.094 ZEBRA: 0: Add RNH 10.0.0.1/32 type Nexthop 2022/05/17 09:57:26.094 ZEBRA: 0:10.0.0.1/32: Evaluate RNH, type Nexthop (force) 2022/05/17 09:57:26.094 ZEBRA: 0:10.0.0.1/32: NH has become unresolved 2022/05/17 09:57:26.094 ZEBRA: 0: Client bgp registers for RNH 10.0.0.1/32 type Nexthop 2022/05/17 09:57:26.094 BGP: VRF default(0): Rcvd NH update 10.0.0.1/32(0) - metric 0/0 #nhops 0/0 flags 0x6 2022/05/17 09:57:26.094 BGP: NH update for 10.0.0.1/32(0)(VRF default) - flags 0x6 chgflags 0x0 - evaluate paths 2022/05/17 09:57:26.094 BGP: evaluate_paths: Updating peer (10.0.0.1(VRF default)) status with NHT 2022/05/17 09:57:30.081 ZEBRA: Event driven route-map update triggered 2022/05/17 09:57:30.081 ZEBRA: Event handler for route-map: 10.0.0.1-out 2022/05/17 09:57:30.081 ZEBRA: Event handler for route-map: 10.0.0.1-in 2022/05/17 09:57:31.104 ZEBRA: netlink_parse_info: netlink-listen (NS 0) type RTM_NEWNEIGH(28), len=76, seq=0, pid=0 2022/05/17 09:57:31.104 ZEBRA: Neighbor Entry received is not on a VLAN or a BRIDGE, ignoring 2022/05/17 09:57:31.105 ZEBRA: netlink_parse_info: netlink-listen (NS 0) type RTM_NEWNEIGH(28), len=76, seq=0, pid=0 2022/05/17 09:57:31.105 ZEBRA: Neighbor Entry received is not on a VLAN or a BRIDGE, ignoring",
"oc get -n metallb-system pods -l component=speaker",
"NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 56m speaker-gvfnf 4/4 Running 0 56m",
"oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show running-config\"",
"Building configuration Current configuration: ! frr version 7.5.1_git frr defaults traditional hostname some-hostname log file /etc/frr/frr.log informational log timestamp precision 3 service integrated-vtysh-config ! router bgp 64500 1 bgp router-id 10.0.1.2 no bgp ebgp-requires-policy no bgp default ipv4-unicast no bgp network import-check neighbor 10.0.2.3 remote-as 64500 2 neighbor 10.0.2.3 bfd profile doc-example-bfd-profile-full 3 neighbor 10.0.2.3 timers 5 15 neighbor 10.0.2.4 remote-as 64500 neighbor 10.0.2.4 bfd profile doc-example-bfd-profile-full neighbor 10.0.2.4 timers 5 15 ! address-family ipv4 unicast network 203.0.113.200/30 4 neighbor 10.0.2.3 activate neighbor 10.0.2.3 route-map 10.0.2.3-in in neighbor 10.0.2.4 activate neighbor 10.0.2.4 route-map 10.0.2.4-in in exit-address-family ! address-family ipv6 unicast network fc00:f853:ccd:e799::/124 neighbor 10.0.2.3 activate neighbor 10.0.2.3 route-map 10.0.2.3-in in neighbor 10.0.2.4 activate neighbor 10.0.2.4 route-map 10.0.2.4-in in exit-address-family ! route-map 10.0.2.3-in deny 20 ! route-map 10.0.2.4-in deny 20 ! ip nht resolve-via-default ! ipv6 nht resolve-via-default ! line vty ! bfd profile doc-example-bfd-profile-full transmit-interval 35 receive-interval 35 passive-mode echo-mode echo-interval 35 minimum-ttl 10 ! ! end",
"oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show bgp summary\"",
"IPv4 Unicast Summary: BGP router identifier 10.0.1.2, local AS number 64500 vrf-id 0 BGP table version 1 RIB entries 1, using 192 bytes of memory Peers 2, using 29 KiB of memory Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd PfxSnt 10.0.2.3 4 64500 387 389 0 0 0 00:32:02 0 1 1 10.0.2.4 4 64500 0 0 0 0 0 never Active 0 2 Total number of neighbors 2 IPv6 Unicast Summary: BGP router identifier 10.0.1.2, local AS number 64500 vrf-id 0 BGP table version 1 RIB entries 1, using 192 bytes of memory Peers 2, using 29 KiB of memory Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd PfxSnt 10.0.2.3 4 64500 387 389 0 0 0 00:32:02 NoNeg 10.0.2.4 4 64500 0 0 0 0 0 never Active 0 Total number of neighbors 2",
"oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show bgp ipv4 unicast 203.0.113.200/30\"",
"BGP routing table entry for 203.0.113.200/30 Paths: (1 available, best #1, table default) Advertised to non peer-group peers: 10.0.2.3 1 Local 0.0.0.0 from 0.0.0.0 (10.0.1.2) Origin IGP, metric 0, weight 32768, valid, sourced, local, best (First path received) Last update: Mon Jan 10 19:49:07 2022",
"oc get -n metallb-system pods -l component=speaker",
"NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 26m speaker-gvfnf 4/4 Running 0 26m",
"oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show bfd peers brief\"",
"Session count: 2 SessionId LocalAddress PeerAddress Status ========= ============ =========== ====== 3909139637 10.0.1.2 10.0.2.3 up <.>",
"pod_network_name_info{interface=\"net0\",namespace=\"namespacename\",network_name=\"nadnamespace/firstNAD\",pod=\"podname\"} 0",
"(container_network_receive_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_errors_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_errors_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name)"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/networking/index |
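A minimal verification sketch for the MetalLB LoadBalancer examples above, assuming the placeholder <service_name> Service, its namespace, and its port 8080 from the earlier manifests; the address that MetalLB assigned can be read from the service status and exercised directly:
# read the address MetalLB assigned to the service
oc get service <service_name> -n <namespace> -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
# exercise the service on the port exposed by the earlier manifest
curl http://<external_ip_address>:8080
If the jsonpath query returns an empty value, no address has been allocated yet; the Events section of oc describe service <service_name>, shown in the earlier output, reports allocation failures such as AllocationFailed.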
Chapter 15. Encrypting etcd data | Chapter 15. Encrypting etcd data 15.1. About etcd encryption By default, etcd data is not encrypted in OpenShift Container Platform. You can enable etcd encryption for your cluster to provide an additional layer of data security. For example, it can help protect against the loss of sensitive data if an etcd backup is exposed to the incorrect parties. When you enable etcd encryption, the following OpenShift API server and Kubernetes API server resources are encrypted: Secrets Config maps Routes OAuth access tokens OAuth authorize tokens When you enable etcd encryption, encryption keys are created. You must have these keys to restore from an etcd backup. Note Etcd encryption only encrypts values, not keys. Resource types, namespaces, and object names are unencrypted. If etcd encryption is enabled during a backup, the static_kuberesources_<datetimestamp>.tar.gz file contains the encryption keys for the etcd snapshot. For security reasons, store this file separately from the etcd snapshot. However, this file is required to restore the state of etcd from the respective etcd snapshot. 15.2. Supported encryption types The following encryption types are supported for encrypting etcd data in OpenShift Container Platform: AES-CBC Uses AES-CBC with PKCS#7 padding and a 32 byte key to perform the encryption. The encryption keys are rotated weekly. AES-GCM Uses AES-GCM with a random nonce and a 32 byte key to perform the encryption. The encryption keys are rotated weekly. 15.3. Enabling etcd encryption You can enable etcd encryption to encrypt sensitive resources in your cluster. Warning Do not back up etcd resources until the initial encryption process is completed. If the encryption process is not completed, the backup might be only partially encrypted. After you enable etcd encryption, several changes can occur: The etcd encryption might affect the memory consumption of a few resources. You might notice a transient effect on backup performance because the leader must serve the backup. Disk I/O can affect the node that receives the backup state. You can encrypt the etcd database with either AES-GCM or AES-CBC encryption. Note To migrate your etcd database from one encryption type to the other, you can modify the API server's spec.encryption.type field. Migration of the etcd data to the new encryption type occurs automatically. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Modify the APIServer object: USD oc edit apiserver Set the spec.encryption.type field to aesgcm or aescbc : spec: encryption: type: aesgcm 1 1 Set to aesgcm for AES-GCM encryption or aescbc for AES-CBC encryption. Save the file to apply the changes. The encryption process starts. It can take 20 minutes or longer for this process to complete, depending on the size of the etcd database. Verify that etcd encryption was successful. Review the Encrypted status condition for the OpenShift API server to verify that its resources were successfully encrypted: USD oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows EncryptionCompleted upon successful encryption: EncryptionCompleted All resources encrypted: routes.route.openshift.io If the output shows EncryptionInProgress , encryption is still in progress. Wait a few minutes and try again.
Review the Encrypted status condition for the Kubernetes API server to verify that its resources were successfully encrypted: USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows EncryptionCompleted upon successful encryption: EncryptionCompleted All resources encrypted: secrets, configmaps If the output shows EncryptionInProgress , encryption is still in progress. Wait a few minutes and try again. Review the Encrypted status condition for the OpenShift OAuth API server to verify that its resources were successfully encrypted: USD oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows EncryptionCompleted upon successful encryption: EncryptionCompleted All resources encrypted: oauthaccesstokens.oauth.openshift.io, oauthauthorizetokens.oauth.openshift.io If the output shows EncryptionInProgress , encryption is still in progress. Wait a few minutes and try again. 15.4. Disabling etcd encryption You can disable encryption of etcd data in your cluster. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Modify the APIServer object: USD oc edit apiserver Set the encryption field type to identity : spec: encryption: type: identity 1 1 The identity type is the default value and means that no encryption is performed. Save the file to apply the changes. The decryption process starts. It can take 20 minutes or longer for this process to complete, depending on the size of your cluster. Verify that etcd decryption was successful. Review the Encrypted status condition for the OpenShift API server to verify that its resources were successfully decrypted: USD oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows DecryptionCompleted upon successful decryption: DecryptionCompleted Encryption mode set to identity and everything is decrypted If the output shows DecryptionInProgress , decryption is still in progress. Wait a few minutes and try again. Review the Encrypted status condition for the Kubernetes API server to verify that its resources were successfully decrypted: USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows DecryptionCompleted upon successful decryption: DecryptionCompleted Encryption mode set to identity and everything is decrypted If the output shows DecryptionInProgress , decryption is still in progress. Wait a few minutes and try again. Review the Encrypted status condition for the OpenShift OAuth API server to verify that its resources were successfully decrypted: USD oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows DecryptionCompleted upon successful decryption: DecryptionCompleted Encryption mode set to identity and everything is decrypted If the output shows DecryptionInProgress , decryption is still in progress. Wait a few minutes and try again. | [
"oc edit apiserver",
"spec: encryption: type: aesgcm 1",
"oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"EncryptionCompleted All resources encrypted: routes.route.openshift.io",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"EncryptionCompleted All resources encrypted: secrets, configmaps",
"oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"EncryptionCompleted All resources encrypted: oauthaccesstokens.oauth.openshift.io, oauthauthorizetokens.oauth.openshift.io",
"oc edit apiserver",
"spec: encryption: type: identity 1",
"oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"DecryptionCompleted Encryption mode set to identity and everything is decrypted",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"DecryptionCompleted Encryption mode set to identity and everything is decrypted",
"oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"DecryptionCompleted Encryption mode set to identity and everything is decrypted"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/security_and_compliance/encrypting-etcd |
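A non-interactive sketch of the same change, assuming the default APIServer object name cluster used throughout this chapter; the spec.encryption.type value can be set with a single patch instead of oc edit apiserver, after which the verification commands above apply unchanged:
# set spec.encryption.type without opening an editor
oc patch apiserver cluster --type=merge -p '{"spec":{"encryption":{"type":"aesgcm"}}}'
Patching the type back to identity in the same way disables encryption, as described in section 15.4.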
function::qsq_throughput | function::qsq_throughput Name function::qsq_throughput - Number of requests served per unit time Synopsis Arguments qname queue name scale scale variable to account for the interval fraction Description This function returns the average number of requests served per microsecond. | [
"qsq_throughput:long(qname:string,scale:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-qsq-throughput |
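A short usage sketch for qsq_throughput, with all specifics assumed rather than taken from this reference entry: the queue name "myq", the block I/O probe points, and the scale value of 1000000 (intended to turn the per-microsecond figure into an approximate requests-per-second figure) are illustrative choices that rely on the companion queue_stats functions qsq_start, qs_wait, qs_run, and qs_done:
# begin accounting for the illustrative queue "myq"
probe begin { qsq_start("myq") }
# record each block I/O request entering the queue and starting service
probe ioblock.request { qs_wait("myq"); qs_run("myq") }
# record completion of each request
probe ioblock.end { qs_done("myq") }
# report an approximate requests-per-second figure every 10 seconds
probe timer.s(10) { printf("myq throughput: %d requests/s\n", qsq_throughput("myq", 1000000)) }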
Chapter 4. Creating a workbench and selecting an IDE | Chapter 4. Creating a workbench and selecting an IDE A workbench is an isolated area where you can examine and work with ML models. You can also work with data and run programs, for example, to prepare and clean data. While a workbench is not required if, for example, you only want to serve an existing model, one is needed for most data science workflow tasks, such as writing code to process data or training a model. When you create a workbench, you specify an image (an IDE, packages, and other dependencies). Supported IDEs include JupyterLab, code-server, and RStudio (Technology Preview). The IDEs are based on a server-client architecture. Each IDE provides a server that runs in a container on the OpenShift cluster, while the user interface (the client) is displayed in your web browser. For example, the Jupyter notebook server runs in a container on the Red Hat OpenShift cluster. The client is the JupyterLab interface that opens in your web browser on your local computer. All of the commands that you enter in JupyterLab are executed by the notebook server. Similarly, other IDEs like code-server or RStudio Server provide a server that runs in a container on the OpenShift cluster, while the user interface is displayed in your web browser. This architecture allows you to interact through your local computer in a browser environment, while all processing occurs on the cluster. The cluster provides the benefits of larger available resources and security because the data being processed never leaves the cluster. In a workbench, you can also configure connections (to access external data for training models and to save models so that you can deploy them) and cluster storage (for persisting data). Workbenches within the same project can share models and data through object storage with the data science pipelines and model servers. For data science projects that require data retention, you can add container storage to the workbench you are creating. Within a project, you can create multiple workbenches. When to create a new workbench depends on considerations, such as the following: The workbench configuration (for example, CPU, RAM, or IDE). If you want to avoid editing an existing workbench's configuration to accommodate a new task, you can create a new workbench instead. Separation of tasks or activities. For example, you might want to use one workbench for your Large Language Models (LLM) experimentation activities, another workbench dedicated to a demo, and another workbench for testing. 4.1. About workbench images A workbench image (sometimes referred to as a notebook image) is optimized with the tools and libraries that you need for model development. You can use the provided workbench images or an OpenShift AI administrator can create custom workbench images adapted to your needs. To provide a consistent, stable platform for your model development, many provided workbench images contain the same version of Python. Most workbench images available on OpenShift AI are pre-built and ready for you to use immediately after OpenShift AI is installed or upgraded. For information about Red Hat support of workbench images and packages, see Red Hat OpenShift AI: Supported Configurations . The following table lists the workbench images that are installed with Red Hat OpenShift AI by default.
If the preinstalled packages that are provided in these images are not sufficient for your use case, you have the following options: Install additional libraries after launching a default image. This option is good if you want to add libraries on an ad hoc basis as you develop models. However, it can be challenging to manage the dependencies of installed libraries and your changes are not saved when the workbench restarts. Create a custom image that includes the additional libraries or packages. For more information, see Creating custom workbench images . Important Workbench images denoted with (Technology Preview) in this table are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using Technology Preview features in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Table 4.1. Default workbench images Image name Description CUDA If you are working with compute-intensive data science models that require GPU support, use the Compute Unified Device Architecture (CUDA) workbench image to gain access to the NVIDIA CUDA Toolkit. Using this toolkit, you can optimize your work by using GPU-accelerated libraries and optimization tools. Standard Data Science Use the Standard Data Science workbench image for models that do not require TensorFlow or PyTorch. This image contains commonly used libraries to assist you in developing your machine learning models. TensorFlow TensorFlow is an open source platform for machine learning. With TensorFlow, you can build, train, and deploy your machine learning models. TensorFlow contains advanced data visualization features, such as computational graph visualizations. It also allows you to easily monitor and track the progress of your models. PyTorch PyTorch is an open source machine learning library optimized for deep learning. If you are working with computer vision or natural language processing models, use the PyTorch workbench image. Minimal Python If you do not require advanced machine learning features, or additional resources for compute-intensive data science work, you can use the Minimal Python image to develop your models. TrustyAI Use the TrustyAI workbench image to augment your data science work with model explainability, tracing, accountability, and runtime monitoring. See the TrustyAI Explainability repository for some example Jupyter notebooks. code-server With the code-server workbench image, you can customize your workbench environment to meet your needs by using a variety of extensions to add new languages, themes, and debuggers, and to connect to additional services. Enhance the efficiency of your data science work with syntax highlighting, auto-indentation, and bracket matching, as well as an automatic task runner for seamless automation. For more information, see code-server in GitHub . NOTE: Elyra-based pipelines are not available with the code-server workbench image. RStudio Server (Technology Preview) Use the RStudio Server workbench image to access the RStudio IDE, an integrated development environment for R, a programming language for statistical computing and graphics. For more information, see the RStudio Server site .
To use the RStudio Server workbench image, you must first build it by creating a secret and triggering the BuildConfig, and then enable it in the OpenShift AI UI by editing the rstudio-rhel9 image stream. For more information, see Building the RStudio Server workbench images . Important Disclaimer: Red Hat supports managing workbenches in OpenShift AI. However, Red Hat does not provide support for the RStudio software. RStudio Server is available through https://rstudio.org/ and is subject to RStudio licensing terms. Review the licensing terms before you use this sample workbench. CUDA - RStudio Server (Technology Preview) Use the CUDA - RStudio Server workbench image to access the RStudio IDE and NVIDIA CUDA Toolkit. RStudio is an integrated development environment for R, a programming language for statistical computing and graphics. With the NVIDIA CUDA toolkit, you can optimize your work using GPU-accelerated libraries and optimization tools. For more information, see the RStudio Server site . To use the CUDA - RStudio Server workbench image, you must first build it by creating a secret and triggering the BuildConfig, and then enable it in the OpenShift AI UI by editing the cuda-rstudio-rhel9 image stream. For more information, see Building the RStudio Server workbench images . Important Disclaimer: Red Hat supports managing workbenches in OpenShift AI. However, Red Hat does not provide support for the RStudio software. RStudio Server is available through https://rstudio.org/ and is subject to RStudio licensing terms. Review the licensing terms before you use this sample workbench. The CUDA - RStudio Server workbench image contains NVIDIA CUDA technology. CUDA licensing information is available at https://docs.nvidia.com/cuda/ . Review the licensing terms before you use this sample workbench. ROCm Use the ROCm notebook image to run AI and machine learning workloads on AMD GPUs in OpenShift AI. It includes ROCm libraries and tools optimized for high-performance GPU acceleration, supporting custom AI workflows and data processing tasks. Use this image integrating additional frameworks or dependencies tailored to your specific AI development needs. ROCm-PyTorch Use the ROCm-PyTorch notebook image to optimize PyTorch workloads on AMD GPUs in OpenShift AI. It includes ROCm-accelerated PyTorch libraries, enabling efficient deep learning training, inference, and experimentation. This image is designed for data scientists working with PyTorch-based workflows, offering integration with GPU scheduling. ROCm-TensorFlow Use the ROCm-TensorFlow notebook image to optimize TensorFlow workloads on AMD GPUs in OpenShift AI. It includes ROCm-accelerated TensorFlow libraries to support high-performance deep learning model training and inference. This image simplifies TensorFlow development on AMD GPUs and integrates with OpenShift AI for resource scaling and management. 4.2. Building the RStudio Server workbench images Important The RStudio Server and CUDA - RStudio Server workbench images are currently available in Red Hat OpenShift AI as Technology Preview features. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat OpenShift AI includes the following RStudio Server workbench images: RStudio Server workbench image With the RStudio Server workbench image, you can access the RStudio IDE, an integrated development environment for the R programming language. R is used for statistical computing and graphics to support data analysis and predictions. Important Disclaimer: Red Hat supports managing workbenches in OpenShift AI. However, Red Hat does not provide support for the RStudio software. RStudio Server is available through rstudio.org and is subject to their licensing terms. You should review their licensing terms before you use this sample workbench. CUDA - RStudio Server workbench image With the CUDA - RStudio Server workbench image, you can access the RStudio IDE and NVIDIA CUDA Toolkit. The RStudio IDE is an integrated development environment for the R programming language for statistical computing and graphics. With the NVIDIA CUDA toolkit, you can enhance your work by using GPU-accelerated libraries and optimization tools. Important Disclaimer: Red Hat supports managing workbenches in OpenShift AI. However, Red Hat does not provide support for the RStudio software. RStudio Server is available through rstudio.org and is subject to their licensing terms. You should review their licensing terms before you use this sample workbench. The CUDA - RStudio Server workbench image contains NVIDIA CUDA technology. CUDA licensing information is available in the CUDA Toolkit documentation . You should review their licensing terms before you use this sample workbench. To use the RStudio Server and CUDA - RStudio Server workbench images, you must first build them by creating a secret and triggering the BuildConfig , and then enable them in the OpenShift AI UI by editing the rstudio-rhel9 and cuda-rstudio-rhel9 image streams. Prerequisites Before starting the RStudio Server build process, you have at least 1 CPU and 2Gi memory available for rstudio-server-rhel9 , and 1.5 CPUs and 8Gi memory available for cuda-rstudio-server-rhel9 on your cluster. You are logged in to your OpenShift cluster. You have the cluster-admin role in OpenShift. You have an active Red Hat Enterprise Linux (RHEL) subscription. Procedure Create a secret with Subscription Manager credentials. These are usually your Red Hat Customer Portal username and password. Note: The secret must be named rhel-subscription-secret , and its USERNAME and PASSWORD keys must be in capital letters. Start the build: To start the lightweight RStudio Server build: To start the CUDA-enabled RStudio Server build, trigger the cuda-rhel9 BuildConfig: The cuda-rhel9 build is a prerequisite for cuda-rstudio-rhel9. The cuda-rstudio-rhel9 build starts automatically. Confirm that the build process has completed successfully using the following command. Successful builds appear as Complete . After the builds complete successfully, use the following commands to make the workbench images available in the OpenShift AI UI. To enable the RStudio Server workbench image: To enable the CUDA - RStudio Server workbench image: Verification You can see RStudio Server and CUDA - RStudio Server images on the Applications Enabled menu in the Red Hat OpenShift AI dashboard. You can see R Studio Server or CUDA - RStudio Server in the Data Science Projects Workbenches Create workbench Notebook image Image selection dropdown list. 4.3. 
Creating a workbench When you create a workbench, you specify an image (an IDE, packages, and other dependencies). You can also configure connections, cluster storage, and add container storage. Prerequisites You have logged in to Red Hat OpenShift AI. If you use OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You created a project. If you created a Simple Storage Service (S3) account outside of Red Hat OpenShift AI and you want to create connections to your existing S3 storage buckets, you have the following credential information for the storage buckets: Endpoint URL Access key Secret key Region Bucket name For more information, see Working with data in an S3-compatible object store . Procedure From the OpenShift AI dashboard, click Data Science Projects . The Data Science Projects page opens. Click the name of the project that you want to add the workbench to. A project details page opens. Click the Workbenches tab. Click Create workbench . The Create workbench page opens. In the Name field, enter a unique name for your workbench. Optional: If you want to change the default resource name for your workbench, click Edit resource name . The resource name is what your resource is labeled in OpenShift. Valid characters include lowercase letters, numbers, and hyphens (-). The resource name cannot exceed 30 characters, and it must start with a letter and end with a letter or number. Note: You cannot change the resource name after the workbench is created. You can edit only the display name and the description. Optional: In the Description field, enter a description for your workbench. In the Notebook image section, complete the fields to specify the workbench image to use with your workbench. From the Image selection list, select a workbench image that suits your use case. A workbench image includes an IDE and Python packages (reusable code). Optionally, click View package information to view a list of packages that are included in the image that you selected. If the workbench image has multiple versions available, select the workbench image version to use from the Version selection list. To use the latest package versions, Red Hat recommends that you use the most recently added image. Note You can change the workbench image after you create the workbench. In the Deployment size section, from the Container size list, select a container size for your server. The container size specifies the number of CPUs and the amount of memory allocated to the container, setting the guaranteed minimum (request) and maximum (limit) for both. Optional: In the Environment variables section, select and specify values for any environment variables. Setting environment variables during the workbench configuration helps you save time later because you do not need to define them in the body of your notebooks, or with the IDE command line interface. If you are using S3-compatible storage, add these recommended environment variables: AWS_ACCESS_KEY_ID specifies your Access Key ID for Amazon Web Services. AWS_SECRET_ACCESS_KEY specifies your Secret access key for the account specified in AWS_ACCESS_KEY_ID . OpenShift AI stores the credentials as Kubernetes secrets in a protected namespace if you select Secret when you add the variable. In the Cluster storage section, configure the storage for your workbench. Select one of the following options: Create new persistent storage to create storage that is retained after you shut down your workbench. 
Complete the relevant fields to define the storage: Enter a name for the cluster storage. Enter a description for the cluster storage. Select a storage class for the cluster storage. Note You cannot change the storage class after you add the cluster storage to the workbench. Under Persistent storage size , enter a new size in gibibytes or mebibytes. Use existing persistent storage to reuse existing storage and select the storage from the Persistent storage list. Optional: You can add a connection to your workbench. A connection is a resource that contains the configuration parameters needed to connect to a data source or sink, such as an object storage bucket. You can use storage buckets for storing data, models, and pipeline artifacts. You can also use a connection to specify the location of a model that you want to deploy. In the Connections section, use an existing connection or create a new connection: Use an existing connection as follows: Click Attach existing connections . From the Connection list, select a connection that you previously defined. Create a new connection as follows: Click Create connection . The Add connection dialog appears. From the Connection type drop-down list, select the type of connection. The Connection details section appears. If you selected S3 compatible object storage in the preceding step, configure the connection details: In the Connection name field, enter a unique name for the connection. Optional: In the Description field, enter a description for the connection. In the Access key field, enter the access key ID for the S3-compatible object storage provider. In the Secret key field, enter the secret access key for the S3-compatible object storage account that you specified. In the Endpoint field, enter the endpoint of your S3-compatible object storage bucket. In the Region field, enter the default region of your S3-compatible object storage account. In the Bucket field, enter the name of your S3-compatible object storage bucket. Click Create . If you selected URI in the preceding step, configure the connection details: In the Connection name field, enter a unique name for the connection. Optional: In the Description field, enter a description for the connection. In the URI field, enter the Uniform Resource Identifier (URI). Click Create . Click Create workbench . Verification The workbench that you created appears on the Workbenches tab for the project. Any cluster storage that you associated with the workbench during the creation process appears on the Cluster storage tab for the project. The Status column on the Workbenches tab displays a status of Starting when the workbench server is starting, and Running when the workbench has successfully started. Optional: Click the Open link to open the IDE in a new window. | [
"create secret generic rhel-subscription-secret --from-literal=USERNAME=<username> --from-literal=PASSWORD=<password> -n redhat-ods-applications",
"start-build rstudio-server-rhel9 -n redhat-ods-applications --follow",
"start-build cuda-rhel9 -n redhat-ods-applications --follow",
"get builds -n redhat-ods-applications",
"label -n redhat-ods-applications imagestream rstudio-rhel9 opendatahub.io/notebook-image='true'",
"label -n redhat-ods-applications imagestream cuda-rstudio-rhel9 opendatahub.io/notebook-image='true'"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/getting_started_with_red_hat_openshift_ai_cloud_service/creating-a-workbench-select-ide_get-started |
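A consolidated sketch of the RStudio build workflow described in section 4.2 above, assuming the listed commands are run with the OpenShift CLI (oc) while logged in to the cluster as a cluster administrator; the oc prefix is an assumption, and <username> and <password> are placeholders for your Red Hat Customer Portal credentials:

# Create the subscription secret, then trigger the builds
oc create secret generic rhel-subscription-secret --from-literal=USERNAME=<username> --from-literal=PASSWORD=<password> -n redhat-ods-applications
oc start-build rstudio-server-rhel9 -n redhat-ods-applications --follow
oc start-build cuda-rhel9 -n redhat-ods-applications --follow
# Confirm the builds completed, then enable the images in the OpenShift AI UI
oc get builds -n redhat-ods-applications
oc label -n redhat-ods-applications imagestream rstudio-rhel9 opendatahub.io/notebook-image='true'
oc label -n redhat-ods-applications imagestream cuda-rstudio-rhel9 opendatahub.io/notebook-image='true'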
Automation mesh for managed cloud or operator environments | Automation mesh for managed cloud or operator environments Red Hat Ansible Automation Platform 2.5 Automate at scale in a cloud-native way Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/automation_mesh_for_managed_cloud_or_operator_environments/index |
4.3. GDB | 4.3. GDB Fundamentally, like most debuggers, GDB manages the execution of compiled code in a very closely controlled environment. This environment makes possible the following fundamental mechanisms necessary to the operation of GDB: Inspect and modify memory within the code being debugged (for example, reading and setting variables). Control the execution state of the code being debugged, principally whether it's running or stopped. Detect the execution of particular sections of code (for example, stop running code when it reaches a specified area of interest to the programmer). Detect access to particular areas of memory (for example, stop running code when it accesses a specified variable). Execute portions of code (from an otherwise stopped program) in a controlled manner. Detect various programmatic asynchronous events such as signals. The operation of these mechanisms rely mostly on information produced by a compiler. For example, to view the value of a variable, GDB has to know: The location of the variable in memory The nature of the variable This means that displaying a double-precision floating point value requires a very different process from displaying a string of characters. For something complex like a structure, GDB has to know not only the characteristics of each individual elements in the structure, but the morphology of the structure as well. GDB requires the following items in order to fully function: Debug Information Much of GDB's operations rely on a program's debug information . While this information generally comes from compilers, much of it is necessary only while debugging a program, that is, it is not used during the program's normal execution. For this reason, compilers do not always make that information available by default - GCC, for instance, must be explicitly instructed to provide this debugging information with the -g flag. To make full use of GDB's capabilities, it is highly advisable to make the debug information available first to GDB. GDB can only be of very limited use when run against code with no available debug information. Source Code One of the most useful features of GDB (or any other debugger) is the ability to associate events and circumstances in program execution with their corresponding location in source code. This location normally refers to a specific line or series of lines in a source file. This, of course, would require that a program's source code be available to GDB at debug time. 4.3.1. Simple GDB GDB literally contains dozens of commands. This section describes the most fundamental ones. br (breakpoint) The breakpoint command instructs GDB to halt execution upon reaching a specified point in the execution. That point can be specified a number of ways, but the most common are just as the line number in the source file, or the name of a function. Any number of breakpoints can be in effect simultaneously. This is frequently the first command issued after starting GDB. r (run) The run command starts the execution of the program. If run is executed with any arguments, those arguments are passed on to the executable as if the program has been started normally. Users normally issue this command after setting breakpoints. Before an executable is started, or once the executable stops at, for example, a breakpoint, the state of many aspects of the program can be inspected. The following commands are a few of the more common ways things can be examined. 
p (print) The print command displays the value of the argument given, and that argument can be almost anything relevant to the program. Usually, the argument is the name of a variable of any complexity, from a simple single value to a structure. An argument can also be an expression valid in the current language, including the use of program variables and library functions, or functions defined in the program being tested. bt (backtrace) The backtrace command displays the chain of function calls used up until the execution was terminated. This is useful for investigating serious bugs (such as segmentation faults) with elusive causes. l (list) When execution is stopped, the list command shows the line in the source code corresponding to where the program stopped. The execution of a stopped program can be resumed in a number of ways. The following are the most common. c (continue) The continue command restarts the execution of the program, which will continue to execute until it encounters a breakpoint, runs into a specified or emergent condition (for example, an error), or terminates. n (next) Like continue , the next command also restarts execution; however, in addition to the stopping conditions implicit in the continue command, next will also halt execution at the next sequential line of code in the current source file. s (step) Like next , the step command also halts execution at each sequential line of code in the current source file. However, if execution is currently stopped at a source line containing a function call , GDB stops execution after entering the function call (rather than executing it). fini (finish) Like the aforementioned commands, the finish command resumes execution, but halts when execution returns from a function. Finally, two essential commands: q (quit) This terminates the execution. h (help) The help command provides access to its extensive internal documentation. The command takes arguments: help breakpoint (or h br ), for example, shows a detailed description of the breakpoint command. See the help output of each command for more detailed information. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/commandlinegdb |
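A minimal sketch of the GDB workflow described above, assuming a small C program in a file named hello.c with a main function and a local variable named count (the file, function, and variable names are placeholders):

# Compile with debug information, then start a debugging session
gcc -g -o hello hello.c
gdb ./hello
(gdb) br main
(gdb) r
(gdb) n
(gdb) p count
(gdb) bt
(gdb) c
(gdb) q

Compiling with the -g flag makes the debug information available to GDB; the remaining lines exercise the breakpoint, run, next, print, backtrace, continue, and quit commands in order.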
Chapter 6. dwz | Chapter 6. dwz dwz is a command line tool that attempts to optimize DWARF debugging information contained in ELF shared libraries and ELF executables for size. To do so, dwz replaces DWARF information representation with equivalent smaller representation where possible and reduces the amount of duplication by using techniques from Appendix E of the DWARF Standard . Red Hat Developer Toolset is distributed with dwz 0.14 . 6.1. Installing dwz In Red Hat Developer Toolset, the dwz utility is provided by the devtoolset-12-dwz package and is automatically installed with devtoolset-12-toolchain as described in Section 1.5, "Installing Red Hat Developer Toolset" . 6.2. Using dwz To optimize DWARF debugging information in a binary file, run the dwz tool as follows: Note that you can execute any command using the scl utility, causing it to be run with the Red Hat Developer Toolset binaries used in preference to the Red Hat Enterprise Linux system equivalent. This allows you to run a shell session with Red Hat Developer Toolset dwz as default: Note To verify the version of dwz you are using at any point: Red Hat Developer Toolset's dwz executable path will begin with /opt . Alternatively, you can use the following command to confirm that the version number matches that for Red Hat Developer Toolset dwz : 6.3. Additional Resources For more information about dwz and its features, see the resources listed below. Installed Documentation dwz (1) - The manual page for the dwz utility provides detailed information on its usage. To display the manual page for the version included in Red Hat Developer Toolset: See Also Chapter 1, Red Hat Developer Toolset - An overview of Red Hat Developer Toolset and more information on how to install it on your system. Chapter 2, GNU Compiler Collection (GCC) - Instructions on compiling programs written in C, C++, and Fortran. Chapter 4, binutils - Instructions on using binutils , a collection of binary tools to inspect and manipulate object files and binaries. Chapter 5, elfutils - Instructions on using elfutils , a collection of binary tools to inspect and manipulate ELF files. | [
"scl enable devtoolset-12 'dwz option ... file_name '",
"scl enable devtoolset-12 'bash'",
"which dwz",
"dwz -v",
"scl enable devtoolset-12 'man dwz'"
] | https://docs.redhat.com/en/documentation/red_hat_developer_toolset/12/html/user_guide/chap-dwz |
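A minimal sketch of a typical invocation, assuming a separate debug information file named libexample.so.debug (the file name is a placeholder). dwz typically rewrites the file in place when no output option is given, so listing the file size before and after shows the effect of the optimization:

ls -l libexample.so.debug
scl enable devtoolset-12 'dwz libexample.so.debug'
ls -l libexample.so.debug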
Chapter 6. Managing unused rendered machine configs | Chapter 6. Managing unused rendered machine configs The Machine Config Operator (MCO) does not perform any garbage collection activities. This means that all rendered machine configs remain in the cluster. Each time a user or controller applies a new machine config, the MCO creates new rendered configs for each affected machine config pool. Over time, this can lead to a large number of rendered machine configs, which can make working with machine configs confusing. Having too many rendered machine configs can also contribute to disk space issues and performance issues with etcd. You can remove old, unused rendered machine configs by using the oc adm prune renderedmachineconfigs command with the --confirm flag. With this command, you can remove all unused rendered machine configs or only those in a specific machine config pool. You can also remove a specified number of unused rendered machine configs in order to keep some older machine configs, in case you want to check older configurations. You can use the oc adm prune renderedmachineconfigs command without the --confirm flag to see which rendered machine configs would be removed. Use the list subcommand to display all the rendered machine configs in the cluster or a specific machine config pool. Note The oc adm prune renderedmachineconfigs command deletes only rendered machine configs that are not in use. If a rendered machine config is in use by a machine config pool, the rendered machine config is not deleted. In this case, the command output specifies the reason that the rendered machine config was not deleted. 6.1. Viewing rendered machine configs You can view a list of rendered machine configs by using the oc adm prune renderedmachineconfigs command with the list subcommand. For example, the command in the following procedure would list all rendered machine configs for the worker machine config pool. Procedure Optional: List the rendered machine configs by using the following command: USD oc adm prune renderedmachineconfigs list --in-use=false --pool-name=worker where: list Displays a list of rendered machine configs in your cluster. --in-use Optional: Specifies whether to display only the used machine configs or all machine configs from the specified pool. If true , the output lists the rendered machine configs that are being used by a machine config pool. If false , the output lists all rendered machine configs in the cluster. The default value is false . --pool-name Optional: Specifies the machine config pool from which to display the machine configs. Example output worker rendered-worker-f38bf61ced3c920cf5a29a200ed43243 -- 2025-01-21 13:45:01 +0000 UTC (Currently in use: false) rendered-worker-fc94397dc7c43808c7014683c208956e -- 2025-01-30 17:20:53 +0000 UTC (Currently in use: false) rendered-worker-708c652868f7597eaa1e2622edc366ef -- 2025-01-31 18:01:16 +0000 UTC (Currently in use: true) List the rendered machine configs that you can remove automatically by running the following command. Any rendered machine config marked with the as it's currently in use message in the command output cannot be removed. USD oc adm prune renderedmachineconfigs --pool-name=worker The command runs in dry-run mode, and no machine configs are removed. where: --pool-name Optional: Displays the machine configs in the specified machine config pool. Example output Dry run enabled - no modifications will be made. Add --confirm to remove rendered machine configs.
dry-run deleting rendered MachineConfig rendered-worker-f38bf61ced3c920cf5a29a200ed43243 dry-run deleting MachineConfig rendered-worker-fc94397dc7c43808c7014683c208956e Skip dry-run deleting rendered MachineConfig rendered-worker-708c652868f7597eaa1e2622edc366ef as it's currently in use 6.2. Removing unused rendered machine configs You can remove unused rendered machine configs by using the oc adm prune renderedmachineconfigs command with the --confirm flag. If any rendered machine config is not deleted, the command output indicates which was not deleted and lists the reason for skipping the deletion. Procedure Optional: List the rendered machine configs that you can remove automatically by running the following command. Any rendered machine config marked with the as it's currently in use message in the command output cannot be removed. USD oc adm prune renderedmachineconfigs --pool-name=worker Example output Dry run enabled - no modifications will be made. Add --confirm to remove rendered machine configs. dry-run deleting rendered MachineConfig rendered-worker-f38bf61ced3c920cf5a29a200ed43243 dry-run deleting MachineConfig rendered-worker-fc94397dc7c43808c7014683c208956e Skip dry-run deleting rendered MachineConfig rendered-worker-708c652868f7597eaa1e2622edc366ef as it's currently in use where: --pool-name Optional: Specifies the machine config pool from which you want to delete the machine configs. Remove the unused rendered machine configs by running the following command. The command in the following procedure would delete the two oldest unused rendered machine configs in the worker machine config pool. USD oc adm prune renderedmachineconfigs --pool-name=worker --count=2 --confirm where: --count Optional: Specifies the maximum number of unused rendered machine configs you want to delete, starting with the oldest. --confirm Indicates that pruning should occur, instead of performing a dry-run. --pool-name Optional: Specifies the machine config pool from which you want to delete the machine configs. If not specified, all the pools are evaluated. Example output deleting rendered MachineConfig rendered-worker-f38bf61ced3c920cf5a29a200ed43243 deleting rendered MachineConfig rendered-worker-fc94397dc7c43808c7014683c208956e Skip deleting rendered MachineConfig rendered-worker-708c652868f7597eaa1e2622edc366ef as it's currently in use | [
"oc adm prune renderedmachineconfigs list --in-use=false --pool-name=worker",
"worker rendered-worker-f38bf61ced3c920cf5a29a200ed43243 -- 2025-01-21 13:45:01 +0000 UTC (Currently in use: false) rendered-worker-fc94397dc7c43808c7014683c208956e-- 2025-01-30 17:20:53 +0000 UTC (Currently in use: false) rendered-worker-708c652868f7597eaa1e2622edc366ef -- 2025-01-31 18:01:16 +0000 UTC (Currently in use: true)",
"oc adm prune renderedmachineconfigs --pool-name=worker",
"Dry run enabled - no modifications will be made. Add --confirm to remove rendered machine configs. dry-run deleting rendered MachineConfig rendered-worker-f38bf61ced3c920cf5a29a200ed43243 dry-run deleting MachineConfig rendered-worker-fc94397dc7c43808c7014683c208956e Skip dry-run deleting rendered MachineConfig rendered-worker-708c652868f7597eaa1e2622edc366ef as it's currently in use",
"oc adm prune renderedmachineconfigs --pool-name=worker",
"Dry run enabled - no modifications will be made. Add --confirm to remove rendered machine configs. dry-run deleting rendered MachineConfig rendered-worker-f38bf61ced3c920cf5a29a200ed43243 dry-run deleting MachineConfig rendered-worker-fc94397dc7c43808c7014683c208956e Skip dry-run deleting rendered MachineConfig rendered-worker-708c652868f7597eaa1e2622edc366ef as it's currently in use",
"oc adm prune renderedmachineconfigs --pool-name=worker --count=2 --confirm",
"deleting rendered MachineConfig rendered-worker-f38bf61ced3c920cf5a29a200ed43243 deleting rendered MachineConfig rendered-worker-fc94397dc7c43808c7014683c208956e Skip deleting rendered MachineConfig rendered-worker-708c652868f7597eaa1e2622edc366ef as it's currently in use"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/machine_configuration/machine-configs-garbage-collection |
Chapter 53. JPA | Chapter 53. JPA Since Camel 1.0 Both producer and consumer are supported. The JPA component enables you to store and retrieve Java objects from persistent storage using EJB 3's Java Persistence Architecture (JPA). Java Persistence Architecture (JPA) is a standard interface layer that wraps Object/Relational Mapping (ORM) products such as OpenJPA, Hibernate, and TopLink. 53.1. Dependencies When using jpa with Red Hat build of Camel Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jpa-starter</artifactId> </dependency> 53.2. Sending to the endpoint You can store a Java entity bean in a database by sending it to a JPA producer endpoint. The body of the In message is assumed to be an entity bean (that is, a POJO with an @Entity annotation on it) or a collection or array of entity beans. If the body is a List of entities, use entityType=java.util.List as a configuration passed to the producer endpoint. If the body does not contain one of the listed types, put a Message Translator before the endpoint to perform the necessary conversion first. You can use query , namedQuery , or nativeQuery for the producer as well. For the value of the parameters , you can use Simple expressions, which allow you to retrieve parameter values from the message body, headers, and so on. These queries can be used to retrieve a set of data with a SELECT JPQL/SQL statement, as well as to execute a bulk update or delete with an UPDATE / DELETE JPQL/SQL statement. Note that you need to set useExecuteUpdate to true if you execute UPDATE / DELETE with namedQuery , because Camel does not inspect the named query, unlike query and nativeQuery . 53.3. Consuming from the endpoint Consuming messages from a JPA consumer endpoint removes (or updates) entity beans in the database. This allows you to use a database table as a logical queue: consumers take messages from the queue and then delete or update them to logically remove them from the queue. If you do not wish to delete the entity bean when it has been processed (and when routing is done), you can specify consumeDelete=false on the URI. This will result in the entity being processed on each poll. If you would rather perform some update on the entity to mark it as processed (such as to exclude it from a future query), then you can annotate a method with @Consumed , which will be invoked on your entity bean when it has been processed (and when routing is done). You can use @PreConsumed , which will be invoked on your entity bean before it has been processed (before routing). If you are consuming a large number (100K+) of rows and experience OutOfMemory problems, you should set maximumResults to a sensible value. 53.4. URI format For sending to the endpoint, the entityClassName is optional. If specified, it helps the Type Converter to ensure the body is of the correct type. For consuming, the entityClassName is mandatory. 53.5. Configuring Options Camel components are configured on two levels: Component level Endpoint level 53.5.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, URLs for network connections, and so on.
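A minimal Java sketch of component-level configuration as described here, assuming a CamelContext named camelContext and an EntityManagerFactory named emf are already available in your application (both variable names are illustrative):

import javax.persistence.EntityManagerFactory;
import org.apache.camel.CamelContext;
import org.apache.camel.component.jpa.JpaComponent;

// Configure the shared JPA component once; every jpa: endpoint in this
// CamelContext then inherits the same EntityManagerFactory.
JpaComponent jpa = camelContext.getComponent("jpa", JpaComponent.class);
jpa.setEntityManagerFactory(emf);

Setting the factory at the component level corresponds to the entityManagerFactory option described below and avoids repeating the configuration on every endpoint.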
Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 53.5.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 53.6. Component Options The JPA component supports 9 options, which are listed below. Name Description Default Type aliases (common) Maps an alias to a JPA entity class. The alias can then be used in the endpoint URI (instead of the fully qualified class name). Map entityManagerFactory (common) To use the EntityManagerFactory. This is strongly recommended to configure. EntityManagerFactory joinTransaction (common) The camel-jpa component will join transaction by default. You can use this option to turn this off, for example if you use LOCAL_RESOURCE and join transaction doesn't work with your JPA provider. This option can also be set globally on the JpaComponent, instead of having to set it on all endpoints. true boolean sharedEntityManager (common) Whether to use Spring's SharedEntityManager for the consumer/producer. Note in most cases joinTransaction should be set to false as this is not an EXTENDED EntityManager. false boolean transactionManager (common) To use the PlatformTransactionManager for managing transactions. PlatformTransactionManager transactionStrategy (common) To use the TransactionStrategy for running the operations in a transaction. TransactionStrategy bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 53.6.1. Endpoint Options The JPA endpoint is configured using URI syntax: with the following path and query parameters: 53.6.1.1. Path Parameters (1 parameters) Name Description Default Type entityType (common) Required Entity class name. Class 53.6.1.2. Query Parameters (44 parameters) Name Description Default Type joinTransaction (common) The camel-jpa component will join transaction by default. You can use this option to turn this off, for example if you use LOCAL_RESOURCE and join transaction doesn't work with your JPA provider. This option can also be set globally on the JpaComponent, instead of having to set it on all endpoints. true boolean maximumResults (common) Set the maximum number of results to retrieve on the Query. -1 int namedQuery (common) To use a named query. String nativeQuery (common) To use a custom native query. You may want to use the option resultClass also when using native queries. String persistenceUnit (common) Required The JPA persistence unit used by default. camel String query (common) To use a custom query. String resultClass (common) Defines the type of the returned payload (we will call entityManager.createNativeQuery(nativeQuery, resultClass) instead of entityManager.createNativeQuery(nativeQuery)). Without this option, we will return an object array. Only has an affect when using in conjunction with native query when consuming data. Class consumeDelete (consumer) If true, the entity is deleted after it is consumed; if false, the entity is not deleted. true boolean consumeLockEntity (consumer) Specifies whether or not to set an exclusive lock on each entity bean while processing the results from polling. true boolean deleteHandler (consumer) To use a custom DeleteHandler to delete the row after the consumer is done processing the exchange. DeleteHandler lockModeType (consumer) To configure the lock mode on the consumer. Enum values: READ WRITE OPTIMISTIC OPTIMISTIC_FORCE_INCREMENT PESSIMISTIC_READ PESSIMISTIC_WRITE PESSIMISTIC_FORCE_INCREMENT NONE PESSIMISTIC_WRITE LockModeType maxMessagesPerPoll (consumer) An integer value to define the maximum number of messages to gather per poll. By default, no maximum is set. Can be used to avoid polling many thousands of messages when starting up the server. Set a value of 0 or negative to disable. int preDeleteHandler (consumer) To use a custom Pre-DeleteHandler to delete the row after the consumer has read the entity. DeleteHandler sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean skipLockedEntity (consumer) To configure whether to use NOWAIT on lock and silently skip the entity. false boolean transacted (consumer) Whether to run the consumer in transacted mode, by which all messages will either commit or rollback, when the entire batch has been processed. The default behavior (false) is to commit all the previously successfully processed messages, and only rollback the last failed message. false boolean bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern parameters (consumer (advanced)) This key/value mapping is used for building the query parameters. It is expected to be of the generic type java.util.Map where the keys are the named parameters of a given JPA query and the values are their corresponding effective values you want to select for. When it's used for producer, Simple expression can be used as a parameter value. It allows you to retrieve parameter values from the message body, header and etc. Map pollStrategy (consumer (advanced)) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPollStrategy findEntity (producer) If enabled then the producer will find a single entity by using the message body as key and entityType as the class type. This can be used instead of a query to find a single entity. false boolean flushOnSend (producer) Flushes the EntityManager after the entity bean has been persisted. true boolean remove (producer) Indicates to use entityManager.remove(entity). false boolean useExecuteUpdate (producer) To configure whether to use executeUpdate() when producer executes a query. When you use INSERT, UPDATE or DELETE statement as a named query, you need to specify this option to 'true'. Boolean usePersist (producer) Indicates to use entityManager.persist(entity) instead of entityManager.merge(entity). Note: entityManager.persist(entity) doesn't work for detached entities (where the EntityManager has to execute an UPDATE instead of an INSERT query)!. false boolean lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean usePassedInEntityManager (producer (advanced)) If set to true, then Camel will use the EntityManager from the header JpaConstants.ENTITY_MANAGER instead of the configured entity manager on the component/endpoint. This allows end users to control which entity manager will be in use. false boolean entityManagerProperties (advanced) Additional properties for the entity manager to use. Map sharedEntityManager (advanced) Whether to use Spring's SharedEntityManager for the consumer/producer. Note in most cases joinTransaction should be set to false as this is not an EXTENDED EntityManager. 
false boolean backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. 1000 long repeatCount (scheduler) Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. 0 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values: TRACE DEBUG INFO WARN ERROR OFF TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutorService scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. none Object schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. Enum values: NANOSECONDS MICROSECONDS MILLISECONDS SECONDS MINUTES HOURS DAYS MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean 53.7. Message Headers The JPA component supports 2 message header(s), which are listed below: Name Description Default Type CamelEntityManager (common) Constant: ENTITY_MANAGER The JPA EntityManager object. EntityManager CamelJpaParameters (producer) Constant: link: JPA_PARAMETERS_HEADER Alternative way for passing query parameters as an Exchange header. Map 53.8. Configuring EntityManagerFactory It is recommended to configure the JPA component to use a specific EntityManagerFactory instance. If failed to do so each JpaEndpoint will auto create their own instance of EntityManagerFactory which most often is not what you want. For example, you can instantiate a JPA component that references the myEMFactory entity manager factory, as follows: <bean id="jpa" class="org.apache.camel.component.jpa.JpaComponent"> <property name="entityManagerFactory" ref="myEMFactory"/> </bean> The JpaComponent automatically looks up the EntityManagerFactory from the Registry which means you do not need to configure this on the JpaComponent as shown above. You only need to do so if there is ambiguity, in which case Camel will log a WARN. 53.9. 
Configuring TransactionManager The JpaComponent automatically looks up the TransactionManager from the Registry. If Camel won't find any TransactionManager instance registered, it will also look up for the TransactionTemplate and try to extract TransactionManager from it. If none TransactionTemplate is available in the registry, JpaEndpoint will auto create their own instance of TransactionManager which most often is not what you want. If more than single instance of the TransactionManager is found, Camel will log a WARN. In such cases you might want to instantiate and explicitly configure a JPA component that references the myTransactionManager transaction manager, as follows: <bean id="jpa" class="org.apache.camel.component.jpa.JpaComponent"> <property name="entityManagerFactory" ref="myEMFactory"/> <property name="transactionManager" ref="myTransactionManager"/> </bean> 53.10. Using a consumer with a named query For consuming only selected entities, you can use the namedQuery URI query option. First, you have to define the named query in the JPA Entity class: @Entity @NamedQuery(name = "step1", query = "select x from MultiSteps x where x.step = 1") public class MultiSteps { ... } After that you can define a consumer uri as shown below: from("jpa://org.apache.camel.examples.MultiSteps?namedQuery=step1") .to("bean:myBusinessLogic"); 53.11. Using a consumer with a query For consuming only selected entities, you can use the query URI query option. You only have to define the query option: from("jpa://org.apache.camel.examples.MultiSteps?query=select o from org.apache.camel.examples.MultiSteps o where o.step = 1") .to("bean:myBusinessLogic"); 53.12. Using a consumer with a native query For consuming only selected entities, you can use the nativeQuery URI query option. You only have to define the native query option: from("jpa://org.apache.camel.examples.MultiSteps?nativeQuery=select * from MultiSteps where step = 1") .to("bean:myBusinessLogic"); If you use the native query option, you will receive an object array in the message body. 53.13. Using a producer with a named query For retrieving selected entities or execute bulk update/delete, you can use the namedQuery URI query option. First, you have to define the named query in the JPA Entity class: @Entity @NamedQuery(name = "step1", query = "select x from MultiSteps x where x.step = 1") public class MultiSteps { ... } After that you can define a producer uri as shown below: from("direct:namedQuery") .to("jpa://org.apache.camel.examples.MultiSteps?namedQuery=step1"); Note that you need to specify useExecuteUpdate option to true to execute UPDATE / DELETE statement as a named query. 53.14. Using a producer with a query For retrieving selected entities or execute bulk update/delete, you can use the query URI query option. You only have to define the query option: from("direct:query") .to("jpa://org.apache.camel.examples.MultiSteps?query=select o from org.apache.camel.examples.MultiSteps o where o.step = 1"); 53.15. Using a producer with a native query For retrieving selected entities or execute bulk update/delete, you can use the nativeQuery URI query option. You only have to define the native query option: from("direct:nativeQuery") .to("jpa://org.apache.camel.examples.MultiSteps?resultClass=org.apache.camel.examples.MultiSteps&nativeQuery=select * from MultiSteps where step = 1"); If you use the native query option without specifying resultClass , you will receive an object array in the message body. 53.16. 
Using the JPA-Based Idempotent Repository The Idempotent Consumer from the EIP patterns is used to filter out duplicate messages. A JPA-based idempotent repository is provided. To use the JPA-based idempotent repository, complete the following steps. Procedure Set up a persistence-unit in the persistence.xml file. Set up an org.springframework.orm.jpa.JpaTemplate which is used by the org.apache.camel.processor.idempotent.jpa.JpaMessageIdRepository . Configure the idempotent repository as org.apache.camel.processor.idempotent.jpa.JpaMessageIdRepository . Create the JPA idempotent repository in the Spring XML file as shown below: <camelContext xmlns="http://camel.apache.org/schema/spring"> <route id="JpaMessageIdRepositoryTest"> <from uri="direct:start" /> <idempotentConsumer idempotentRepository="jpaStore"> <header>messageId</header> <to uri="mock:result" /> </idempotentConsumer> </route> </camelContext> When running this Camel component's tests inside your IDE If you run the tests of this component directly inside your IDE, and not through Maven, then you could see exceptions like these: The problem here is that the source has been compiled or recompiled through your IDE and not through Maven, which would enhance the byte-code at build time . To overcome this, you need to enable dynamic byte-code enhancement of OpenJPA . For example, assuming the current OpenJPA version being used in Camel is 2.2.1, to run the tests inside your IDE you would need to pass the following argument to the JVM: 53.17. Spring Boot Auto-Configuration The component supports 10 options, which are listed below. Name Description Default Type camel.component.jpa.aliases Maps an alias to a JPA entity class. The alias can then be used in the endpoint URI (instead of the fully qualified class name). Map camel.component.jpa.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.jpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.jpa.enabled Whether to enable auto configuration of the jpa component. This is enabled by default. Boolean camel.component.jpa.entity-manager-factory To use the EntityManagerFactory. This is strongly recommended to configure. The option is a javax.persistence.EntityManagerFactory type. EntityManagerFactory camel.component.jpa.join-transaction The camel-jpa component will join transaction by default. You can use this option to turn this off, for example if you use LOCAL_RESOURCE and join transaction doesn't work with your JPA provider. This option can also be set globally on the JpaComponent, instead of having to set it on all endpoints. true Boolean camel.component.jpa.lazy-start-producer Whether the producer should be started lazy (on the first message).
By starting lazy, you can use this to allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail to be started. By deferring this startup to be lazy, the startup failure can be handled while routing messages via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. false Boolean camel.component.jpa.shared-entity-manager Whether to use Spring's SharedEntityManager for the consumer/producer. Note in most cases joinTransaction should be set to false as this is not an EXTENDED EntityManager. false Boolean camel.component.jpa.transaction-manager To use the PlatformTransactionManager for managing transactions. The option is an org.springframework.transaction.PlatformTransactionManager type. PlatformTransactionManager camel.component.jpa.transaction-strategy To use the TransactionStrategy for running the operations in a transaction. The option is an org.apache.camel.component.jpa.TransactionStrategy type. TransactionStrategy | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jpa-starter</artifactId> </dependency>",
"jpa:entityClassName[?options]",
"jpa:entityType",
"<bean id=\"jpa\" class=\"org.apache.camel.component.jpa.JpaComponent\"> <property name=\"entityManagerFactory\" ref=\"myEMFactory\"/> </bean>",
"<bean id=\"jpa\" class=\"org.apache.camel.component.jpa.JpaComponent\"> <property name=\"entityManagerFactory\" ref=\"myEMFactory\"/> <property name=\"transactionManager\" ref=\"myTransactionManager\"/> </bean>",
"@Entity @NamedQuery(name = \"step1\", query = \"select x from MultiSteps x where x.step = 1\") public class MultiSteps { }",
"from(\"jpa://org.apache.camel.examples.MultiSteps?namedQuery=step1\") .to(\"bean:myBusinessLogic\");",
"from(\"jpa://org.apache.camel.examples.MultiSteps?query=select o from org.apache.camel.examples.MultiSteps o where o.step = 1\") .to(\"bean:myBusinessLogic\");",
"from(\"jpa://org.apache.camel.examples.MultiSteps?nativeQuery=select * from MultiSteps where step = 1\") .to(\"bean:myBusinessLogic\");",
"@Entity @NamedQuery(name = \"step1\", query = \"select x from MultiSteps x where x.step = 1\") public class MultiSteps { }",
"from(\"direct:namedQuery\") .to(\"jpa://org.apache.camel.examples.MultiSteps?namedQuery=step1\");",
"from(\"direct:query\") .to(\"jpa://org.apache.camel.examples.MultiSteps?query=select o from org.apache.camel.examples.MultiSteps o where o.step = 1\");",
"from(\"direct:nativeQuery\") .to(\"jpa://org.apache.camel.examples.MultiSteps?resultClass=org.apache.camel.examples.MultiSteps&nativeQuery=select * from MultiSteps where step = 1\");",
"<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route id=\"JpaMessageIdRepositoryTest\"> <from uri=\"direct:start\" /> <idempotentConsumer idempotentRepository=\"jpaStore\"> <header>messageId</header> <to uri=\"mock:result\" /> </idempotentConsumer> </route> </camelContext>",
"org.springframework.transaction.CannotCreateTransactionException: Could not open JPA EntityManager for transaction; nested exception is <openjpa-2.2.1-r422266:1396819 nonfatal user error> org.apache.openjpa.persistence.ArgumentException: This configuration disallows runtime optimization, but the following listed types were not enhanced at build time or at class load time with a javaagent: \"org.apache.camel.examples.SendEmail\". at org.springframework.orm.jpa.JpaTransactionManager.doBegin(JpaTransactionManager.java:427) at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:371) at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:127) at org.apache.camel.processor.jpa.JpaRouteTest.cleanupRepository(JpaRouteTest.java:96) at org.apache.camel.processor.jpa.JpaRouteTest.createCamelContext(JpaRouteTest.java:67) at org.apache.camel.test.junit5.CamelTestSupport.doSetUp(CamelTestSupport.java:238) at org.apache.camel.test.junit5.CamelTestSupport.setUp(CamelTestSupport.java:208)",
"-javaagent:<path_to_your_local_m2_cache>/org/apache/openjpa/openjpa/2.2.1/openjpa-2.2.1.jar"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-jpa-component-starter |
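A minimal sketch of the useExecuteUpdate behavior mentioned in section 53.13 above. The deleteOldSteps named query and the direct:purge endpoint are hypothetical names used only for illustration; a bulk JPQL delete is declared on the entity and then executed through the JPA producer:
@Entity @NamedQuery(name = "deleteOldSteps", query = "delete from MultiSteps x where x.step = 1") public class MultiSteps { ... }
from("direct:purge") .to("jpa://org.apache.camel.examples.MultiSteps?namedQuery=deleteOldSteps&useExecuteUpdate=true");
Because useExecuteUpdate=true causes the producer to call executeUpdate() on the query, the statement runs as a bulk operation and returns the number of affected rows.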
3.4. Basic Networking Terms | 3.4. Basic Networking Terms Red Hat Virtualization provides networking functionality between virtual machines, virtualization hosts, and wider networks using: Logical networks A network interface controller (NIC) A Linux bridge A Bond A virtual network interface controller (vNIC) A virtual LAN (VLAN) NICs, Linux bridges, and vNICs enable network communication between hosts, virtual machines, local area networks, and the Internet. Bonds and VLANs are optionally implemented to enhance security, fault tolerance, and network capacity. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/introduction_basic_networking_terms |
3.6. Virtual Networking | 3.6. Virtual Networking A virtual guest's connection to any network uses the software network components of the physical host. These software components can be rearranged and reconfigured by using libvirt 's virtual network configuration. The host therefore acts as a virtual network switch , which can be configured in a number of different ways to fit the guest's networking needs. By default, all guests on a single host are connected to the same libvirt virtual network, named default . Guests on this network can make the following connections: With each other and with the virtualization host Both inbound and outbound traffic is possible, but is affected by the firewalls in the guest operating system's network stack and by libvirt network filtering rules attached to the guest interface. With other hosts on the network beyond the virtualization host Only outbound traffic is possible, and is affected by Network Address Translation (NAT) rules, as well as the host system's firewall. However, if needed, guest interfaces can instead be set to one of the following modes: Isolated mode The guests are connected to a network that does not allow any traffic beyond the virtualization host. Routed mode The guests are connected to a network that routes traffic between the guest and external hosts without performing any NAT. This enables incoming connections but requires extra routing-table entries for systems on the external network. Bridged mode The guests are connected to a bridge device that is also connected directly to a physical ethernet device connected to the local ethernet. This makes the guest directly visible on the physical network, and thus enables incoming connections, but does not require any extra routing-table entries. For basic outbound-only network access from virtual machines, no additional network setup is usually needed, as the default network is installed along with the libvirt package, and automatically started when the libvirtd service is started. If more advanced functionality is needed, additional networks can be created and configured using either virsh or virt-manager , and the guest XML configuration file can be edited to use one of these new networks. Note For information on advanced virtual network settings, see the Red Hat Enterprise Linux 6 Virtualization Administration Guide . From the point of view of the guest operating system, a virtual network connection is the same as a normal physical network connection. For further information on configuring networks in Red Hat Enterprise Linux 6 guests, see the Red Hat Enterprise Linux 6 Deployment Guide . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_getting_started_guide/sec-virtualization_getting_started-products-virtual_networking |
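As a rough sketch of the modes described above, an isolated network can be defined with virsh; the isolated0 name and the 192.168.100.0/24 range are placeholders chosen for this example, not values from the guide. An isolated network is simply a libvirt network definition without a <forward> element:
<network> <name>isolated0</name> <ip address="192.168.100.1" netmask="255.255.255.0"> <dhcp> <range start="192.168.100.10" end="192.168.100.100"/> </dhcp> </ip> </network>
Saved as isolated0.xml, the network can then be created and inspected with standard virsh commands:
virsh net-define isolated0.xml
virsh net-start isolated0
virsh net-autostart isolated0
virsh net-list --all
Running virsh net-dumpxml default shows the NAT-based default network for comparison.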
Chapter 3. Top Performance Considerations | Chapter 3. Top Performance Considerations You can improve the performance and scalability of Red Hat Satellite: Configure httpd Configure Puma to increase concurrency Configure Candlepin Configure Pulp Configure Foreman's performance and scalability Configure Dynflow Deploy external Capsules instead of relying on internal Capsules Configure katello-agent for scalability Configure Hammer to reduce API timeouts Configure qpid and qdrouterd Improve PostgreSQL to handle more concurrent loads Configure the storage for DB workloads Ensure the storage requirements for Content Views are met Ensure the system requirements are met Improve the environment for remote execution | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/performance_tuning_guide/top_performance_considerations_performance-tuning |
Chapter 41. Capturing network packets | Chapter 41. Capturing network packets To debug network issues and communications, you can capture network packets. The following sections provide instructions and additional information about capturing network packets. 41.1. Using xdpdump to capture network packets including packets dropped by XDP programs The xdpdump utility captures network packets. Unlike the tcpdump utility, xdpdump uses an extended Berkeley Packet Filter (eBPF) program for this task. This enables xdpdump to also capture packets dropped by Express Data Path (XDP) programs. User-space utilities, such as tcpdump , are not able to capture these dropped packets, or the original packets modified by an XDP program. You can use xdpdump to debug XDP programs that are already attached to an interface. Therefore, the utility can capture packets before an XDP program is started and after it has finished. In the latter case, xdpdump also captures the XDP action. By default, xdpdump captures incoming packets at the entry of the XDP program. Important On architectures other than AMD and Intel 64-bit, the xdpdump utility is provided as a Technology Preview only. Technology Preview features are not supported with Red Hat production Service Level Agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These previews provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. Note that xdpdump has no packet filter or decode capabilities. However, you can use it in combination with tcpdump for packet decoding. Prerequisites A network driver that supports XDP programs. An XDP program is loaded on the enp1s0 interface. If no program is loaded, xdpdump captures packets in a similar way tcpdump does, for backward compatibility. Procedure To capture packets on the enp1s0 interface and write them to the /root/capture.pcap file, enter: To stop capturing packets, press Ctrl + C . Additional resources xdpdump(8) man page on your system If you are a developer and you are interested in the source code of xdpdump , download and install the corresponding source RPM (SRPM) from the Red Hat Customer Portal. 41.2. Additional resources How to capture network packets with tcpdump? (Red Hat Knowledgebase) | [
"xdpdump -i enp1s0 -w /root/capture.pcap"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_networking/capturing-network-packets_configuring-and-managing-networking |
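Because xdpdump has no decode capability of its own, one common pattern is to stream its capture into tcpdump. The following is a sketch only: it assumes that your xdpdump build accepts - as the argument to -w to write the pcap stream to standard output, and option names can differ between xdp-tools releases, so check xdpdump(8) on your system first:
xdpdump -i enp1s0 -w - | tcpdump -r - -n
Some xdp-tools releases also provide an --rx-capture entry,exit style option to capture packets both before and after the attached XDP program runs; again, verify the exact spelling in the man page for your release.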
1.8. Plan Ahead | 1.8. Plan Ahead System administrators that took all this advice to heart and did their best to follow it would be fantastic system administrators -- for a day. Eventually, the environment will change, and one day our fantastic administrator would be caught flat-footed. The reason? Our fantastic administrator failed to plan ahead. Certainly no one can predict the future with 100% accuracy. However, with a bit of awareness it is easy to read the signs of many changes: An offhand mention of a new project gearing up during that boring weekly staff meeting is a sure sign that you will likely need to support new users in the near future Talk of an impending acquisition means that you may end up being responsible for new (and possibly incompatible) systems in one or more remote locations Being able to read these signs (and to respond effectively to them) makes life easier for you and your users. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s1-philosophy-planning |
Chapter 4. Installing managed clusters with RHACM and SiteConfig resources | Chapter 4. Installing managed clusters with RHACM and SiteConfig resources You can provision OpenShift Container Platform clusters at scale with Red Hat Advanced Cluster Management (RHACM) using the assisted service and the GitOps plugin policy generator with core-reduction technology enabled. The GitOps Zero Touch Provisioning (ZTP) pipeline performs the cluster installations. GitOps ZTP can be used in a disconnected environment. 4.1. GitOps ZTP and Topology Aware Lifecycle Manager GitOps Zero Touch Provisioning (ZTP) generates installation and configuration CRs from manifests stored in Git. These artifacts are applied to a centralized hub cluster where Red Hat Advanced Cluster Management (RHACM), the assisted service, and the Topology Aware Lifecycle Manager (TALM) use the CRs to install and configure the managed cluster. The configuration phase of the GitOps ZTP pipeline uses the TALM to orchestrate the application of the configuration CRs to the cluster. There are several key integration points between GitOps ZTP and the TALM. Inform policies By default, GitOps ZTP creates all policies with a remediation action of inform . These policies cause RHACM to report on compliance status of clusters relevant to the policies but do not apply the desired configuration. During the GitOps ZTP process, after OpenShift installation, the TALM steps through the created inform policies and enforces them on the target managed cluster(s). This applies the configuration to the managed cluster. Outside of the GitOps ZTP phase of the cluster lifecycle, this allows you to change policies without the risk of immediately rolling those changes out to affected managed clusters. You can control the timing and the set of remediated clusters by using TALM. Automatic creation of ClusterGroupUpgrade CRs To automate the initial configuration of newly deployed clusters, TALM monitors the state of all ManagedCluster CRs on the hub cluster. Any ManagedCluster CR that does not have a ztp-done label applied, including newly created ManagedCluster CRs, causes the TALM to automatically create a ClusterGroupUpgrade CR with the following characteristics: The ClusterGroupUpgrade CR is created and enabled in the ztp-install namespace. The ClusterGroupUpgrade CR has the same name as the ManagedCluster CR. The cluster selector includes only the cluster associated with that ManagedCluster CR. The set of managed policies includes all policies that RHACM has bound to the cluster at the time the ClusterGroupUpgrade is created. Pre-caching is disabled. The timeout is set to 4 hours (240 minutes). The automatic creation of an enabled ClusterGroupUpgrade ensures that initial zero-touch deployment of clusters proceeds without the need for user intervention. Additionally, the automatic creation of a ClusterGroupUpgrade CR for any ManagedCluster without the ztp-done label allows a failed GitOps ZTP installation to be restarted by simply deleting the ClusterGroupUpgrade CR for the cluster. Waves Each policy generated from a PolicyGenTemplate CR includes a ztp-deploy-wave annotation. This annotation is based on the same annotation from each CR which is included in that policy. The wave annotation is used to order the policies in the auto-generated ClusterGroupUpgrade CR. The wave annotation is not used other than for the auto-generated ClusterGroupUpgrade CR. Note All CRs in the same policy must have the same setting for the ztp-deploy-wave annotation.
The default value of this annotation for each CR can be overridden in the PolicyGenTemplate . The wave annotation in the source CR is used for determining and setting the policy wave annotation. This annotation is removed from each built CR which is included in the generated policy at runtime. The TALM applies the configuration policies in the order specified by the wave annotations. The TALM waits for each policy to be compliant before moving to the next policy. It is important to ensure that the wave annotation for each CR takes into account any prerequisites for those CRs to be applied to the cluster. For example, an Operator must be installed before or concurrently with the configuration for the Operator. Similarly, the CatalogSource for an Operator must be installed in a wave before or concurrently with the Operator Subscription. The default wave value for each CR takes these prerequisites into account. Multiple CRs and policies can share the same wave number. Having fewer policies can result in faster deployments and lower CPU usage. It is a best practice to group many CRs into relatively few waves. To check the default wave value in each source CR, run the following command against the out/source-crs directory that is extracted from the ztp-site-generate container image: USD grep -r "ztp-deploy-wave" out/source-crs Phase labels The ClusterGroupUpgrade CR is automatically created and includes directives to annotate the ManagedCluster CR with labels at the start and end of the GitOps ZTP process. When GitOps ZTP configuration postinstallation commences, the ManagedCluster has the ztp-running label applied. When all policies are remediated to the cluster and are fully compliant, these directives cause the TALM to remove the ztp-running label and apply the ztp-done label. For deployments that make use of the informDuValidator policy, the ztp-done label is applied when the cluster is fully ready for deployment of applications. This includes all reconciliation and resulting effects of the GitOps ZTP applied configuration CRs. The ztp-done label affects automatic ClusterGroupUpgrade CR creation by TALM. Do not manipulate this label after the initial GitOps ZTP installation of the cluster. Linked CRs The automatically created ClusterGroupUpgrade CR has the owner reference set as the ManagedCluster from which it was derived. This reference ensures that deleting the ManagedCluster CR causes the instance of the ClusterGroupUpgrade to be deleted along with any supporting resources. 4.2. Overview of deploying managed clusters with GitOps ZTP Red Hat Advanced Cluster Management (RHACM) uses GitOps Zero Touch Provisioning (ZTP) to deploy single-node OpenShift Container Platform clusters, three-node clusters, and standard clusters. You manage site configuration data as OpenShift Container Platform custom resources (CRs) in a Git repository. GitOps ZTP uses a declarative GitOps approach for a develop once, deploy anywhere model to deploy the managed clusters.
The deployment of the clusters includes: Installing the host operating system (RHCOS) on a blank server Deploying OpenShift Container Platform Creating cluster policies and site subscriptions Making the necessary network configurations to the server operating system Deploying profile Operators and performing any needed software-related configuration, such as performance profile, PTP, and SR-IOV Overview of the managed site installation process After you apply the managed site custom resources (CRs) on the hub cluster, the following actions happen automatically: A Discovery image ISO file is generated and booted on the target host. When the ISO file successfully boots on the target host it reports the host hardware information to RHACM. After all hosts are discovered, OpenShift Container Platform is installed. When OpenShift Container Platform finishes installing, the hub installs the klusterlet service on the target cluster. The requested add-on services are installed on the target cluster. The Discovery image ISO process is complete when the Agent CR for the managed cluster is created on the hub cluster. Important The target bare-metal host must meet the networking, firmware, and hardware requirements listed in Recommended single-node OpenShift cluster configuration for vDU application workloads . 4.3. Creating the managed bare-metal host secrets Add the required Secret custom resources (CRs) for the managed bare-metal host to the hub cluster. You need a secret for the GitOps Zero Touch Provisioning (ZTP) pipeline to access the Baseboard Management Controller (BMC) and a secret for the assisted installer service to pull cluster installation images from the registry. Note The secrets are referenced from the SiteConfig CR by name. The namespace must match the SiteConfig namespace. Procedure Create a YAML secret file containing credentials for the host Baseboard Management Controller (BMC) and a pull secret required for installing OpenShift and all add-on cluster Operators: Save the following YAML as the file example-sno-secret.yaml : apiVersion: v1 kind: Secret metadata: name: example-sno-bmc-secret namespace: example-sno 1 data: 2 password: <base64_password> username: <base64_username> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: pull-secret namespace: example-sno 3 data: .dockerconfigjson: <pull_secret> 4 type: kubernetes.io/dockerconfigjson 1 Must match the namespace configured in the related SiteConfig CR 2 Base64-encoded values for password and username 3 Must match the namespace configured in the related SiteConfig CR 4 Base64-encoded pull secret Add the relative path to example-sno-secret.yaml to the kustomization.yaml file that you use to install the cluster. 4.4. Configuring Discovery ISO kernel arguments for installations using GitOps ZTP The GitOps Zero Touch Provisioning (ZTP) workflow uses the Discovery ISO as part of the OpenShift Container Platform installation process on managed bare-metal hosts. You can edit the InfraEnv resource to specify kernel arguments for the Discovery ISO. This is useful for cluster installations with specific environmental requirements. For example, configure the rd.net.timeout.carrier kernel argument for the Discovery ISO to facilitate static networking for the cluster or to receive a DHCP address before downloading the root file system during installation. Note In OpenShift Container Platform 4.15, you can only add kernel arguments. You can not replace or delete kernel arguments. 
Prerequisites You have installed the OpenShift CLI (oc). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Create the InfraEnv CR and edit the spec.kernelArguments specification to configure kernel arguments. Save the following YAML in an InfraEnv-example.yaml file: Note The InfraEnv CR in this example uses template syntax such as {{ .Cluster.ClusterName }} that is populated based on values in the SiteConfig CR. The SiteConfig CR automatically populates values for these templates during deployment. Do not edit the templates manually. apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: annotations: argocd.argoproj.io/sync-wave: "1" name: "{{ .Cluster.ClusterName }}" namespace: "{{ .Cluster.ClusterName }}" spec: clusterRef: name: "{{ .Cluster.ClusterName }}" namespace: "{{ .Cluster.ClusterName }}" kernelArguments: - operation: append 1 value: audit=0 2 - operation: append value: trace=1 sshAuthorizedKey: "{{ .Site.SshPublicKey }}" proxy: "{{ .Cluster.ProxySettings }}" pullSecretRef: name: "{{ .Site.PullSecretRef.Name }}" ignitionConfigOverride: "{{ .Cluster.IgnitionConfigOverride }}" nmStateConfigLabelSelector: matchLabels: nmstate-label: "{{ .Cluster.ClusterName }}" additionalNTPSources: "{{ .Cluster.AdditionalNTPSources }}" 1 Specify the append operation to add a kernel argument. 2 Specify the kernel argument you want to configure. This example configures the audit kernel argument and the trace kernel argument. Commit the InfraEnv-example.yaml CR to the same location in your Git repository that has the SiteConfig CR and push your changes. The following example shows a sample Git repository structure: ~/example-ztp/install └── site-install ├── siteconfig-example.yaml ├── InfraEnv-example.yaml ... Edit the spec.clusters.crTemplates specification in the SiteConfig CR to reference the InfraEnv-example.yaml CR in your Git repository: clusters: crTemplates: InfraEnv: "InfraEnv-example.yaml" When you are ready to deploy your cluster by committing and pushing the SiteConfig CR, the build pipeline uses the custom InfraEnv-example CR in your Git repository to configure the infrastructure environment, including the custom kernel arguments. Verification To verify that the kernel arguments are applied, after the Discovery image verifies that OpenShift Container Platform is ready for installation, you can SSH to the target host before the installation process begins. At that point, you can view the kernel arguments for the Discovery ISO in the /proc/cmdline file. Begin an SSH session with the target host: USD ssh -i /path/to/privatekey core@<host_name> View the system's kernel arguments by using the following command: USD cat /proc/cmdline 4.5. Deploying a managed cluster with SiteConfig and GitOps ZTP Use the following procedure to create a SiteConfig custom resource (CR) and related files and initiate the GitOps Zero Touch Provisioning (ZTP) cluster deployment. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You configured the hub cluster for generating the required installation and policy CRs. You created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and you must configure it as a source repository for the ArgoCD application. See "Preparing the GitOps ZTP site configuration repository" for more information. 
Note When you create the source repository, ensure that you patch the ArgoCD application with the argocd/deployment/argocd-openshift-gitops-patch.json patch-file that you extract from the ztp-site-generate container. See "Configuring the hub cluster with ArgoCD". To be ready for provisioning managed clusters, you require the following for each bare-metal host: Network connectivity Your network requires DNS. Managed cluster hosts should be reachable from the hub cluster. Ensure that Layer 3 connectivity exists between the hub cluster and the managed cluster host. Baseboard Management Controller (BMC) details GitOps ZTP uses BMC username and password details to connect to the BMC during cluster installation. The GitOps ZTP plugin manages the ManagedCluster CRs on the hub cluster based on the SiteConfig CR in your site Git repo. You create individual BMCSecret CRs for each host manually. Procedure Create the required managed cluster secrets on the hub cluster. These resources must be in a namespace with a name matching the cluster name. For example, in out/argocd/example/siteconfig/example-sno.yaml , the cluster name and namespace is example-sno . Export the cluster namespace by running the following command: USD export CLUSTERNS=example-sno Create the namespace: USD oc create namespace USDCLUSTERNS Create pull secret and BMC Secret CRs for the managed cluster. The pull secret must contain all the credentials necessary for installing OpenShift Container Platform and all required Operators. See "Creating the managed bare-metal host secrets" for more information. Note The secrets are referenced from the SiteConfig custom resource (CR) by name. The namespace must match the SiteConfig namespace. Create a SiteConfig CR for your cluster in your local clone of the Git repository: Choose the appropriate example for your CR from the out/argocd/example/siteconfig/ folder. The folder includes example files for single node, three-node, and standard clusters: example-sno.yaml example-3node.yaml example-standard.yaml Change the cluster and host details in the example file to match the type of cluster you want. For example: Example single-node OpenShift SiteConfig CR # example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno --- apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: "example-sno" namespace: "example-sno" spec: baseDomain: "example.com" pullSecretRef: name: "assisted-deployment-pull-secret" clusterImageSetNameRef: "openshift-4.10" sshPublicKey: "ssh-rsa AAAA..." clusters: - clusterName: "example-sno" networkType: "OVNKubernetes" # installConfigOverrides is a generic way of passing install-config # parameters through the siteConfig. The 'capabilities' field configures # the composable openshift feature. In this 'capabilities' setting, we # remove all but the marketplace component from the optional set of # components. # Notes: # - OperatorLifecycleManager is needed for 4.15 and later # - NodeTuning is needed for 4.13 and later, not for 4.12 and earlier installConfigOverrides: | { "capabilities": { "baselineCapabilitySet": "None", "additionalEnabledCapabilities": [ "NodeTuning", "OperatorLifecycleManager" ] } } # It is strongly recommended to include crun manifests as part of the additional install-time manifests for 4.13+. # The crun manifests can be obtained from source-crs/optional-extra-manifest/ and added to the git repo ie.sno-extra-manifest. 
# extraManifestPath: sno-extra-manifest clusterLabels: # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples du-profile: "latest" # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples in ../policygentemplates: # ../policygentemplates/common-ranGen.yaml will apply to all clusters with 'common: true' common: true # ../policygentemplates/group-du-sno-ranGen.yaml will apply to all clusters with 'group-du-sno: ""' group-du-sno: "" # ../policygentemplates/example-sno-site.yaml will apply to all clusters with 'sites: "example-sno"' # Normally this should match or contain the cluster name so it only applies to a single cluster sites : "example-sno" clusterNetwork: - cidr: 1001:1::/48 hostPrefix: 64 machineNetwork: - cidr: 1111:2222:3333:4444::/64 serviceNetwork: - 1001:2::/112 additionalNTPSources: - 1111:2222:3333:4444::2 # Initiates the cluster for workload partitioning. Setting specific reserved/isolated CPUSets is done via PolicyTemplate # please see Workload Partitioning Feature for a complete guide. cpuPartitioningMode: AllNodes # Optionally; This can be used to override the KlusterletAddonConfig that is created for this cluster: #crTemplates: # KlusterletAddonConfig: "KlusterletAddonConfigOverride.yaml" nodes: - hostName: "example-node1.example.com" role: "master" # Optionally; This can be used to configure desired BIOS setting on a host: #biosConfigRef: # filePath: "example-hw.profile" bmcAddress: "idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1" bmcCredentialsName: name: "example-node1-bmh-secret" bootMACAddress: "AA:BB:CC:DD:EE:11" # Use UEFISecureBoot to enable secure boot bootMode: "UEFI" rootDeviceHints: deviceName: "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0" # disk partition at `/var/lib/containers` with ignitionConfigOverride. Some values must be updated. See DiskPartitionContainer.md for more details ignitionConfigOverride: | { "ignition": { "version": "3.2.0" }, "storage": { "disks": [ { "device": "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0", "partitions": [ { "label": "var-lib-containers", "sizeMiB": 0, "startMiB": 250000 } ], "wipeTable": false } ], "filesystems": [ { "device": "/dev/disk/by-partlabel/var-lib-containers", "format": "xfs", "mountOptions": [ "defaults", "prjquota" ], "path": "/var/lib/containers", "wipeFilesystem": true } ] }, "systemd": { "units": [ { "contents": "# Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target", "enabled": true, "name": "var-lib-containers.mount" } ] } } nodeNetwork: interfaces: - name: eno1 macAddress: "AA:BB:CC:DD:EE:11" config: interfaces: - name: eno1 type: ethernet state: up ipv4: enabled: false ipv6: enabled: true address: # For SNO sites with static IP addresses, the node-specific, # API and Ingress IPs should all be the same and configured on # the interface - ip: 1111:2222:3333:4444::aaaa:1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 -hop-interface: eno1 -hop-address: 1111:2222:3333:4444::1 table-id: 254 Note For more information about BMC addressing, see the "Additional resources" section. 
The installConfigOverrides and ignitionConfigOverride fields are expanded in the example for ease of readability. You can inspect the default set of extra-manifest MachineConfig CRs in out/argocd/extra-manifest . It is automatically applied to the cluster when it is installed. Optional: To provision additional install-time manifests on the provisioned cluster, create a directory in your Git repository, for example, sno-extra-manifest/ , and add your custom manifest CRs to this directory. If your SiteConfig.yaml refers to this directory in the extraManifestPath field, any CRs in this referenced directory are appended to the default set of extra manifests. Enabling the crun OCI container runtime For optimal cluster performance, enable crun for master and worker nodes in single-node OpenShift, single-node OpenShift with additional worker nodes, three-node OpenShift, and standard clusters. Enable crun in a ContainerRuntimeConfig CR as an additional Day 0 install-time manifest to avoid the cluster having to reboot. The enable-crun-master.yaml and enable-crun-worker.yaml CR files are in the out/source-crs/optional-extra-manifest/ folder that you can extract from the ztp-site-generate container. For more information, see "Customizing extra installation manifests in the GitOps ZTP pipeline". Add the SiteConfig CR to the kustomization.yaml file in the generators section, similar to the example shown in out/argocd/example/siteconfig/kustomization.yaml . Commit the SiteConfig CR and associated kustomization.yaml changes in your Git repository and push the changes. The ArgoCD pipeline detects the changes and begins the managed cluster deployment. Verification Verify that the custom roles and labels are applied after the node is deployed: USD oc describe node example-node.example.com Example output Name: example-node.example.com Roles: control-plane,example-label,master,worker Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux custom-label/parameter1=true kubernetes.io/arch=amd64 kubernetes.io/hostname=cnfdf03.telco5gran.eng.rdu2.redhat.com kubernetes.io/os=linux node-role.kubernetes.io/control-plane= node-role.kubernetes.io/example-label= 1 node-role.kubernetes.io/master= node-role.kubernetes.io/worker= node.openshift.io/os_id=rhcos 1 The custom label is applied to the node. Additional resources Single-node OpenShift SiteConfig CR installation reference 4.5.1. Single-node OpenShift SiteConfig CR installation reference Table 4.1. SiteConfig CR installation options for single-node OpenShift clusters SiteConfig CR field Description spec.cpuPartitioningMode Configure workload partitioning by setting the value for cpuPartitioningMode to AllNodes . To complete the configuration, specify the isolated and reserved CPUs in the PerformanceProfile CR. Note Configuring workload partitioning by using the cpuPartitioningMode field in the SiteConfig CR is a Tech Preview feature in OpenShift Container Platform 4.13. metadata.name Set name to assisted-deployment-pull-secret and create the assisted-deployment-pull-secret CR in the same namespace as the SiteConfig CR. spec.clusterImageSetNameRef Configure the image set available on the hub cluster for all the clusters in the site. To see the list of supported versions on your hub cluster, run oc get clusterimagesets . installConfigOverrides Set the installConfigOverrides field to enable or disable optional components prior to cluster installation. Important Use the reference configuration as specified in the example SiteConfig CR. 
Adding additional components back into the system might require additional reserved CPU capacity. spec.clusters.clusterImageSetNameRef Specifies the cluster image set used to deploy an individual cluster. If defined, it overrides the spec.clusterImageSetNameRef at site level. spec.clusters.clusterLabels Configure cluster labels to correspond to the bindingRules field in the PolicyGenTemplate CRs that you define. For example, policygentemplates/common-ranGen.yaml applies to all clusters with common: true set, and policygentemplates/group-du-sno-ranGen.yaml applies to all clusters with group-du-sno: "" set. spec.clusters.crTemplates.KlusterletAddonConfig Optional. Set KlusterletAddonConfig to KlusterletAddonConfigOverride.yaml to override the default KlusterletAddonConfig that is created for the cluster. spec.clusters.nodes.hostName For single-node deployments, define a single host. For three-node deployments, define three hosts. For standard deployments, define three hosts with role: master and two or more hosts defined with role: worker . spec.clusters.nodes.nodeLabels Specify custom roles for your nodes in your managed clusters. These additional roles are not used by any OpenShift Container Platform components, only by the user. When you add a custom role, it can be associated with a custom machine config pool that references a specific configuration for that role. Adding custom labels or roles during installation makes the deployment process more effective and prevents the need for additional reboots after the installation is complete. spec.clusters.nodes.automatedCleaningMode Optional. Uncomment and set the value to metadata to enable the removal of the disk's partitioning table only, without fully wiping the disk. The default value is disabled . spec.clusters.nodes.bmcAddress BMC address that you use to access the host. Applies to all cluster types. GitOps ZTP supports iPXE and virtual media booting by using Redfish or IPMI protocols. To use iPXE booting, you must use RHACM 2.8 or later. For more information about BMC addressing, see the "Additional resources" section. Note In far edge Telco use cases, only virtual media is supported for use with GitOps ZTP. spec.clusters.nodes.bmcCredentialsName Configure the bmh-secret CR that you separately create with the host BMC credentials. When creating the bmh-secret CR, use the same namespace as the SiteConfig CR that provisions the host. spec.clusters.nodes.bootMode Set the boot mode for the host to UEFI . The default value is UEFI . Use UEFISecureBoot to enable secure boot on the host. spec.clusters.nodes.rootDeviceHints Specifies the device for deployment. Identifiers that are stable across reboots are recommended. For example, wwn: <disk_wwn> or deviceName: /dev/disk/by-path/<device_path> . <by-path> values are preferred. For a detailed list of stable identifiers, see the "About root device hints" section. spec.clusters.nodes.ignitionConfigOverride Optional. Use this field to assign partitions for persistent storage. Adjust disk ID and size to the specific hardware. spec.clusters.nodes.nodeNetwork Configure the network settings for the node.
spec.clusters.nodes.nodeNetwork.config.interfaces.ipv6 Configure the IPv6 address for the host. For single-node OpenShift clusters with static IP addresses, the node-specific API and Ingress IPs should be the same. Additional resources Customizing extra installation manifests in the GitOps ZTP pipeline Preparing the GitOps ZTP site configuration repository Configuring the hub cluster with ArgoCD Signalling ZTP cluster deployment completion with validator inform policies Creating the managed bare-metal host secrets BMC addressing About root device hints 4.6. Monitoring managed cluster installation progress The ArgoCD pipeline uses the SiteConfig CR to generate the cluster configuration CRs and syncs it with the hub cluster. You can monitor the progress of the synchronization in the ArgoCD dashboard. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure When the synchronization is complete, the installation generally proceeds as follows: The Assisted Service Operator installs OpenShift Container Platform on the cluster. You can monitor the progress of cluster installation from the RHACM dashboard or from the command line by running the following commands: Export the cluster name: USD export CLUSTER=<clusterName> Query the AgentClusterInstall CR for the managed cluster: USD oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.conditions[?(@.type=="Completed")]}' | jq Get the installation events for the cluster: USD curl -sk USD(oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.debugInfo.eventsURL}') | jq '.[-2,-1]' 4.7. Troubleshooting GitOps ZTP by validating the installation CRs The ArgoCD pipeline uses the SiteConfig and PolicyGenTemplate custom resources (CRs) to generate the cluster configuration CRs and Red Hat Advanced Cluster Management (RHACM) policies. Use the following steps to troubleshoot issues that might occur during this process. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Check that the installation CRs were created by using the following command: USD oc get AgentClusterInstall -n <cluster_name> If no object is returned, use the following steps to troubleshoot the ArgoCD pipeline flow from SiteConfig files to the installation CRs. Verify that the ManagedCluster CR was generated using the SiteConfig CR on the hub cluster: USD oc get managedcluster If the ManagedCluster is missing, check if the clusters application failed to synchronize the files from the Git repository to the hub cluster: USD oc get applications.argoproj.io -n openshift-gitops clusters -o yaml To identify error logs for the managed cluster, inspect the status.operationState.syncResult.resources field. For example, if an invalid value is assigned to the extraManifestPath in the SiteConfig CR, an error similar to the following is generated: syncResult: resources: - group: ran.openshift.io kind: SiteConfig message: The Kubernetes API could not find ran.openshift.io/SiteConfig for requested resource spoke-sno/spoke-sno. Make sure the "SiteConfig" CRD is installed on the destination cluster To see a more detailed SiteConfig error, complete the following steps: In the Argo CD dashboard, click the SiteConfig resource that Argo CD is trying to sync. Check the DESIRED MANIFEST tab to find the siteConfigError field. 
siteConfigError: >- Error: could not build the entire SiteConfig defined by /tmp/kust-plugin-config-1081291903: stat sno-extra-manifest: no such file or directory Check the Status.Sync field. If there are log errors, the Status.Sync field could indicate an Unknown error: Status: Sync: Compared To: Destination: Namespace: clusters-sub Server: https://kubernetes.default.svc Source: Path: sites-config Repo URL: https://git.com/ran-sites/siteconfigs/.git Target Revision: master Status: Unknown 4.8. Troubleshooting GitOps ZTP virtual media booting on Supermicro servers SuperMicro X11 servers do not support virtual media installations when the image is served using the https protocol. As a result, single-node OpenShift deployments for this environment fail to boot on the target node. To avoid this issue, log in to the hub cluster and disable Transport Layer Security (TLS) in the Provisioning resource. This ensures the image is not served with TLS even though the image address uses the https scheme. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Disable TLS in the Provisioning resource by running the following command: USD oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"disableVirtualMediaTLS": true}}' Continue the steps to deploy your single-node OpenShift cluster. 4.9. Removing a managed cluster site from the GitOps ZTP pipeline You can remove a managed site and the associated installation and configuration policy CRs from the GitOps Zero Touch Provisioning (ZTP) pipeline. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Remove a site and the associated CRs by removing the associated SiteConfig and PolicyGenTemplate files from the kustomization.yaml file. Add the following syncOptions field to your SiteConfig application: kind: Application spec: syncPolicy: syncOptions: - PrunePropagationPolicy=background When you run the GitOps ZTP pipeline again, the generated CRs are removed. Optional: If you want to permanently remove a site, you should also remove the SiteConfig and site-specific PolicyGenTemplate files from the Git repository. Optional: If you want to remove a site temporarily, for example when redeploying a site, you can leave the SiteConfig and site-specific PolicyGenTemplate CRs in the Git repository. Additional resources For information about removing a cluster, see Removing a cluster from management . 4.10. Removing obsolete content from the GitOps ZTP pipeline If a change to the PolicyGenTemplate configuration results in obsolete policies, for example, if you rename policies, use the following procedure to remove the obsolete policies. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Remove the affected PolicyGenTemplate files from the Git repository, commit and push to the remote repository. Wait for the changes to synchronize through the application and the affected policies to be removed from the hub cluster. Add the updated PolicyGenTemplate files back to the Git repository, and then commit and push to the remote repository. Note Removing GitOps Zero Touch Provisioning (ZTP) policies from the Git repository, and as a result also removing them from the hub cluster, does not affect the configuration of the managed cluster. 
The policy and CRs managed by that policy remain in place on the managed cluster. Optional: As an alternative, after making changes to PolicyGenTemplate CRs that result in obsolete policies, you can remove these policies from the hub cluster manually. You can delete policies from the RHACM console using the Governance tab or by running the following command: USD oc delete policy -n <namespace> <policy_name> 4.11. Tearing down the GitOps ZTP pipeline You can remove the ArgoCD pipeline and all generated GitOps Zero Touch Provisioning (ZTP) artifacts. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. Procedure Detach all clusters from Red Hat Advanced Cluster Management (RHACM) on the hub cluster. Delete the kustomization.yaml file in the deployment directory using the following command: USD oc delete -k out/argocd/deployment Commit and push your changes to the site repository. | [
"grep -r \"ztp-deploy-wave\" out/source-crs",
"apiVersion: v1 kind: Secret metadata: name: example-sno-bmc-secret namespace: example-sno 1 data: 2 password: <base64_password> username: <base64_username> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: pull-secret namespace: example-sno 3 data: .dockerconfigjson: <pull_secret> 4 type: kubernetes.io/dockerconfigjson",
"apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: annotations: argocd.argoproj.io/sync-wave: \"1\" name: \"{{ .Cluster.ClusterName }}\" namespace: \"{{ .Cluster.ClusterName }}\" spec: clusterRef: name: \"{{ .Cluster.ClusterName }}\" namespace: \"{{ .Cluster.ClusterName }}\" kernelArguments: - operation: append 1 value: audit=0 2 - operation: append value: trace=1 sshAuthorizedKey: \"{{ .Site.SshPublicKey }}\" proxy: \"{{ .Cluster.ProxySettings }}\" pullSecretRef: name: \"{{ .Site.PullSecretRef.Name }}\" ignitionConfigOverride: \"{{ .Cluster.IgnitionConfigOverride }}\" nmStateConfigLabelSelector: matchLabels: nmstate-label: \"{{ .Cluster.ClusterName }}\" additionalNTPSources: \"{{ .Cluster.AdditionalNTPSources }}\"",
"~/example-ztp/install └── site-install ├── siteconfig-example.yaml ├── InfraEnv-example.yaml",
"clusters: crTemplates: InfraEnv: \"InfraEnv-example.yaml\"",
"ssh -i /path/to/privatekey core@<host_name>",
"cat /proc/cmdline",
"export CLUSTERNS=example-sno",
"oc create namespace USDCLUSTERNS",
"example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno --- apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"example-sno\" namespace: \"example-sno\" spec: baseDomain: \"example.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"openshift-4.10\" sshPublicKey: \"ssh-rsa AAAA...\" clusters: - clusterName: \"example-sno\" networkType: \"OVNKubernetes\" # installConfigOverrides is a generic way of passing install-config # parameters through the siteConfig. The 'capabilities' field configures # the composable openshift feature. In this 'capabilities' setting, we # remove all but the marketplace component from the optional set of # components. # Notes: # - OperatorLifecycleManager is needed for 4.15 and later # - NodeTuning is needed for 4.13 and later, not for 4.12 and earlier installConfigOverrides: | { \"capabilities\": { \"baselineCapabilitySet\": \"None\", \"additionalEnabledCapabilities\": [ \"NodeTuning\", \"OperatorLifecycleManager\" ] } } # It is strongly recommended to include crun manifests as part of the additional install-time manifests for 4.13+. # The crun manifests can be obtained from source-crs/optional-extra-manifest/ and added to the git repo ie.sno-extra-manifest. # extraManifestPath: sno-extra-manifest clusterLabels: # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples du-profile: \"latest\" # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples in ../policygentemplates: # ../policygentemplates/common-ranGen.yaml will apply to all clusters with 'common: true' common: true # ../policygentemplates/group-du-sno-ranGen.yaml will apply to all clusters with 'group-du-sno: \"\"' group-du-sno: \"\" # ../policygentemplates/example-sno-site.yaml will apply to all clusters with 'sites: \"example-sno\"' # Normally this should match or contain the cluster name so it only applies to a single cluster sites : \"example-sno\" clusterNetwork: - cidr: 1001:1::/48 hostPrefix: 64 machineNetwork: - cidr: 1111:2222:3333:4444::/64 serviceNetwork: - 1001:2::/112 additionalNTPSources: - 1111:2222:3333:4444::2 # Initiates the cluster for workload partitioning. Setting specific reserved/isolated CPUSets is done via PolicyTemplate # please see Workload Partitioning Feature for a complete guide. cpuPartitioningMode: AllNodes # Optionally; This can be used to override the KlusterletAddonConfig that is created for this cluster: #crTemplates: # KlusterletAddonConfig: \"KlusterletAddonConfigOverride.yaml\" nodes: - hostName: \"example-node1.example.com\" role: \"master\" # Optionally; This can be used to configure desired BIOS setting on a host: #biosConfigRef: # filePath: \"example-hw.profile\" bmcAddress: \"idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1\" bmcCredentialsName: name: \"example-node1-bmh-secret\" bootMACAddress: \"AA:BB:CC:DD:EE:11\" # Use UEFISecureBoot to enable secure boot bootMode: \"UEFI\" rootDeviceHints: deviceName: \"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\" # disk partition at `/var/lib/containers` with ignitionConfigOverride. Some values must be updated. 
See DiskPartitionContainer.md for more details ignitionConfigOverride: | { \"ignition\": { \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\", \"partitions\": [ { \"label\": \"var-lib-containers\", \"sizeMiB\": 0, \"startMiB\": 250000 } ], \"wipeTable\": false } ], \"filesystems\": [ { \"device\": \"/dev/disk/by-partlabel/var-lib-containers\", \"format\": \"xfs\", \"mountOptions\": [ \"defaults\", \"prjquota\" ], \"path\": \"/var/lib/containers\", \"wipeFilesystem\": true } ] }, \"systemd\": { \"units\": [ { \"contents\": \"# Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\", \"enabled\": true, \"name\": \"var-lib-containers.mount\" } ] } } nodeNetwork: interfaces: - name: eno1 macAddress: \"AA:BB:CC:DD:EE:11\" config: interfaces: - name: eno1 type: ethernet state: up ipv4: enabled: false ipv6: enabled: true address: # For SNO sites with static IP addresses, the node-specific, # API and Ingress IPs should all be the same and configured on # the interface - ip: 1111:2222:3333:4444::aaaa:1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254",
"oc describe node example-node.example.com",
"Name: example-node.example.com Roles: control-plane,example-label,master,worker Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux custom-label/parameter1=true kubernetes.io/arch=amd64 kubernetes.io/hostname=cnfdf03.telco5gran.eng.rdu2.redhat.com kubernetes.io/os=linux node-role.kubernetes.io/control-plane= node-role.kubernetes.io/example-label= 1 node-role.kubernetes.io/master= node-role.kubernetes.io/worker= node.openshift.io/os_id=rhcos",
"export CLUSTER=<clusterName>",
"oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.conditions[?(@.type==\"Completed\")]}' | jq",
"curl -sk USD(oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.debugInfo.eventsURL}') | jq '.[-2,-1]'",
"oc get AgentClusterInstall -n <cluster_name>",
"oc get managedcluster",
"oc get applications.argoproj.io -n openshift-gitops clusters -o yaml",
"syncResult: resources: - group: ran.openshift.io kind: SiteConfig message: The Kubernetes API could not find ran.openshift.io/SiteConfig for requested resource spoke-sno/spoke-sno. Make sure the \"SiteConfig\" CRD is installed on the destination cluster",
"siteConfigError: >- Error: could not build the entire SiteConfig defined by /tmp/kust-plugin-config-1081291903: stat sno-extra-manifest: no such file or directory",
"Status: Sync: Compared To: Destination: Namespace: clusters-sub Server: https://kubernetes.default.svc Source: Path: sites-config Repo URL: https://git.com/ran-sites/siteconfigs/.git Target Revision: master Status: Unknown",
"oc patch provisioning provisioning-configuration --type merge -p '{\"spec\":{\"disableVirtualMediaTLS\": true}}'",
"kind: Application spec: syncPolicy: syncOptions: - PrunePropagationPolicy=background",
"oc delete policy -n <namespace> <policy_name>",
"oc delete -k out/argocd/deployment"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/edge_computing/ztp-deploying-far-edge-sites |
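Two small supplements to the chapter above, shown as sketches rather than steps from the original procedures. The phase labels described in section 4.1 can be listed for all managed clusters with the standard label-column flag of oc get:
oc get managedcluster -L ztp-running,ztp-done
The base64-encoded values required by the BMC Secret in section 4.3 can be produced with the base64 utility; replace the placeholder credentials with your own values:
echo -n '<bmc_username>' | base64
echo -n '<bmc_password>' | base64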
4.216. pam_ldap | 4.216. pam_ldap 4.216.1. RHBA-2011:1701 - pam_ldap bug fix update An updated pam_ldap package that fixes various bugs is now available for Red Hat Enterprise Linux 6. The pam_ldap package provides the pam_ldap.so module, which allows PAM-aware applications to check user passwords with the help of a directory server. Bug Fixes BZ# 735375 All entries stored in an LDAP directory have a unique "Distinguished Name," or DN. The top level of the LDAP directory tree is the base, referred to as the "base DN". When the host option is not set the pam_ldap.so module uses DNS SRV records to determine which servers to contact to perform authentication. Prior to this update, the LDAP base "DN" option was derived from the DNS domain name, ignoring any value that had been set for "base" in the pam_ldap.conf file. As a consequence, ldap lookups failed. The "base" setting in the configuration file, pam_ldap.conf, is now correctly parsed before the configuration is read from DNS. As a result, "base" is now set to the value given in the pam_ldap.conf file and is used as the base (starting point) to search for a user's credentials. BZ# 688747 Prior to this update, pam_ldap (when called by a process which was running as root) did not re-authenticate the user during a password change operation. As a consequence, reuse of the old password was not prevented. This update backports a fix to ensure that when a privileged application initiates a password change operation, for a user whose password has expired, that the current password is again verified, allowing client-side password-quality checking to be performed when using LDAP accounts. All users of pam_ldap are advised to upgrade to this updated package, which fixes these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/pam_ldap |
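To illustrate the "base" option discussed in BZ#735375, the following is a minimal /etc/pam_ldap.conf sketch; the dc=example,dc=com suffix and the ldap.example.com host are placeholders rather than values from the erratum:
# The base DN set here is now parsed before any configuration derived from DNS is applied
base dc=example,dc=com
# Optional: name a server explicitly instead of relying on DNS SRV discovery
uri ldap://ldap.example.com/
With the fixed package, the explicit base value is honored even when the servers themselves are discovered through DNS SRV records.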
Chapter 16. Using the Stream Control Transmission Protocol (SCTP) | Chapter 16. Using the Stream Control Transmission Protocol (SCTP) As a cluster administrator, you can use the Stream Control Transmission Protocol (SCTP) on a bare-metal cluster. 16.1. Support for SCTP on OpenShift Container Platform As a cluster administrator, you can enable SCTP on the hosts in the cluster. On Red Hat Enterprise Linux CoreOS (RHCOS), the SCTP module is disabled by default. SCTP is a reliable message based protocol that runs on top of an IP network. When enabled, you can use SCTP as a protocol with pods, services, and network policy. A Service object must be defined with the type parameter set to either the ClusterIP or NodePort value. 16.1.1. Example configurations using SCTP protocol You can configure a pod or service to use SCTP by setting the protocol parameter to the SCTP value in the pod or service object. In the following example, a pod is configured to use SCTP: apiVersion: v1 kind: Pod metadata: namespace: project1 name: example-pod spec: containers: - name: example-pod ... ports: - containerPort: 30100 name: sctpserver protocol: SCTP In the following example, a service is configured to use SCTP: apiVersion: v1 kind: Service metadata: namespace: project1 name: sctpserver spec: ... ports: - name: sctpserver protocol: SCTP port: 30100 targetPort: 30100 type: ClusterIP In the following example, a NetworkPolicy object is configured to apply to SCTP network traffic on port 80 from any pods with a specific label: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-sctp-on-http spec: podSelector: matchLabels: role: web ingress: - ports: - protocol: SCTP port: 80 16.2. Enabling Stream Control Transmission Protocol (SCTP) As a cluster administrator, you can load and enable the blacklisted SCTP kernel module on worker nodes in your cluster. Prerequisites Install the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role. Procedure Create a file named load-sctp-module.yaml that contains the following YAML definition: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: load-sctp-module labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modprobe.d/sctp-blacklist.conf mode: 0644 overwrite: true contents: source: data:, - path: /etc/modules-load.d/sctp-load.conf mode: 0644 overwrite: true contents: source: data:,sctp To create the MachineConfig object, enter the following command: USD oc create -f load-sctp-module.yaml Optional: To watch the status of the nodes while the MachineConfig Operator applies the configuration change, enter the following command. When the status of a node transitions to Ready , the configuration update is applied. USD oc get nodes 16.3. Verifying Stream Control Transmission Protocol (SCTP) is enabled You can verify that SCTP is working on a cluster by creating a pod with an application that listens for SCTP traffic, associating it with a service, and then connecting to the exposed service. Prerequisites Access to the internet from the cluster to install the nc package. Install the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role. 
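Before running the verification procedure below, you can optionally confirm at the node level that the MachineConfig from the previous section loaded the sctp module. This is a minimal sketch; <node_name> is a placeholder for one of your worker nodes and is not part of the documented procedure.
# Check that the sctp kernel module is loaded on a worker node
USD oc debug node/<node_name> -- chroot /host bash -c 'lsmod | grep sctp'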
Procedure Create a pod that starts an SCTP listener: Create a file named sctp-server.yaml that defines a pod with the following YAML: apiVersion: v1 kind: Pod metadata: name: sctpserver labels: app: sctpserver spec: containers: - name: sctpserver image: registry.access.redhat.com/ubi9/ubi command: ["/bin/sh", "-c"] args: ["dnf install -y nc && sleep inf"] ports: - containerPort: 30102 name: sctpserver protocol: SCTP Create the pod by entering the following command: USD oc create -f sctp-server.yaml Create a service for the SCTP listener pod. Create a file named sctp-service.yaml that defines a service with the following YAML: apiVersion: v1 kind: Service metadata: name: sctpservice labels: app: sctpserver spec: type: NodePort selector: app: sctpserver ports: - name: sctpserver protocol: SCTP port: 30102 targetPort: 30102 To create the service, enter the following command: USD oc create -f sctp-service.yaml Create a pod for the SCTP client. Create a file named sctp-client.yaml with the following YAML: apiVersion: v1 kind: Pod metadata: name: sctpclient labels: app: sctpclient spec: containers: - name: sctpclient image: registry.access.redhat.com/ubi9/ubi command: ["/bin/sh", "-c"] args: ["dnf install -y nc && sleep inf"] To create the Pod object, enter the following command: USD oc apply -f sctp-client.yaml Run an SCTP listener on the server. To connect to the server pod, enter the following command: USD oc rsh sctpserver To start the SCTP listener, enter the following command: USD nc -l 30102 --sctp Connect to the SCTP listener on the server. Open a new terminal window or tab in your terminal program. Obtain the IP address of the sctpservice service. Enter the following command: USD oc get services sctpservice -o go-template='{{.spec.clusterIP}}{{"\n"}}' To connect to the client pod, enter the following command: USD oc rsh sctpclient To start the SCTP client, enter the following command. Replace <cluster_IP> with the cluster IP address of the sctpservice service. # nc <cluster_IP> 30102 --sctp | [
"apiVersion: v1 kind: Pod metadata: namespace: project1 name: example-pod spec: containers: - name: example-pod ports: - containerPort: 30100 name: sctpserver protocol: SCTP",
"apiVersion: v1 kind: Service metadata: namespace: project1 name: sctpserver spec: ports: - name: sctpserver protocol: SCTP port: 30100 targetPort: 30100 type: ClusterIP",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-sctp-on-http spec: podSelector: matchLabels: role: web ingress: - ports: - protocol: SCTP port: 80",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: load-sctp-module labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modprobe.d/sctp-blacklist.conf mode: 0644 overwrite: true contents: source: data:, - path: /etc/modules-load.d/sctp-load.conf mode: 0644 overwrite: true contents: source: data:,sctp",
"oc create -f load-sctp-module.yaml",
"oc get nodes",
"apiVersion: v1 kind: Pod metadata: name: sctpserver labels: app: sctpserver spec: containers: - name: sctpserver image: registry.access.redhat.com/ubi9/ubi command: [\"/bin/sh\", \"-c\"] args: [\"dnf install -y nc && sleep inf\"] ports: - containerPort: 30102 name: sctpserver protocol: SCTP",
"oc create -f sctp-server.yaml",
"apiVersion: v1 kind: Service metadata: name: sctpservice labels: app: sctpserver spec: type: NodePort selector: app: sctpserver ports: - name: sctpserver protocol: SCTP port: 30102 targetPort: 30102",
"oc create -f sctp-service.yaml",
"apiVersion: v1 kind: Pod metadata: name: sctpclient labels: app: sctpclient spec: containers: - name: sctpclient image: registry.access.redhat.com/ubi9/ubi command: [\"/bin/sh\", \"-c\"] args: [\"dnf install -y nc && sleep inf\"]",
"oc apply -f sctp-client.yaml",
"oc rsh sctpserver",
"nc -l 30102 --sctp",
"oc get services sctpservice -o go-template='{{.spec.clusterIP}}{{\"\\n\"}}'",
"oc rsh sctpclient",
"nc <cluster_IP> 30102 --sctp"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/networking/using-sctp |
Interactively installing RHEL over the network | Interactively installing RHEL over the network Red Hat Enterprise Linux 9 Installing RHEL on several systems using network resources or on a headless system with the graphical installer Red Hat Customer Content Services | [
"dnf install nfs-utils",
"/ exported_directory / clients",
"/rhel9-install *",
"systemctl start nfs-server.service",
"systemctl reload nfs-server.service",
"mkdir /mnt/rhel9-install/",
"mount -o loop,ro -t iso9660 /image_directory/image.iso /mnt/rhel9-install/",
"cp -r /mnt/rhel9-install/ /var/www/html/",
"systemctl start httpd.service",
"systemctl enable firewalld",
"systemctl start firewalld",
"firewall-cmd --add-port min_port - max_port /tcp --permanent firewall-cmd --add-service ftp --permanent",
"firewall-cmd --reload",
"mkdir /mnt/rhel9-install",
"mount -o loop,ro -t iso9660 /image-directory/image.iso /mnt/rhel9-install",
"mkdir /var/ftp/rhel9-install cp -r /mnt/rhel9-install/ /var/ftp/",
"restorecon -r /var/ftp/rhel9-install find /var/ftp/rhel9-install -type f -exec chmod 444 {} \\; find /var/ftp/rhel9-install -type d -exec chmod 755 {} \\;",
"systemctl start vsftpd.service",
"systemctl restart vsftpd.service",
"systemctl enable vsftpd",
"dnf install dhcp-server",
"option architecture-type code 93 = unsigned integer 16; subnet 192.168.124.0 netmask 255.255.255.0 { option routers 192.168.124.1 ; option domain-name-servers 192.168.124.1 ; range 192.168.124.100 192.168.124.200 ; class \"pxeclients\" { match if substring (option vendor-class-identifier, 0, 9) = \"PXEClient\"; next-server 192.168.124.2 ; if option architecture-type = 00:07 { filename \"redhat/EFI/BOOT/BOOTX64.EFI\"; } else { filename \"pxelinux/pxelinux.0\"; } } class \"httpclients\" { match if substring (option vendor-class-identifier, 0, 10) = \"HTTPClient\"; option vendor-class-identifier \"HTTPClient\"; filename \"http:// 192.168.124.2 /redhat/EFI/BOOT/BOOTX64.EFI\"; } }",
"systemctl enable --now dhcpd",
"dnf install dhcp-server",
"option dhcp6.bootfile-url code 59 = string; option dhcp6.vendor-class code 16 = {integer 32, integer 16, string}; subnet6 fd33:eb1b:9b36::/64 { range6 fd33:eb1b:9b36::64 fd33:eb1b:9b36::c8 ; class \"PXEClient\" { match substring (option dhcp6.vendor-class, 6, 9); } subclass \"PXEClient\" \"PXEClient\" { option dhcp6.bootfile-url \"tftp:// [fd33:eb1b:9b36::2] /redhat/EFI/BOOT/BOOTX64.EFI\"; } class \"HTTPClient\" { match substring (option dhcp6.vendor-class, 6, 10); } subclass \"HTTPClient\" \"HTTPClient\" { option dhcp6.bootfile-url \"http:// [fd33:eb1b:9b36::2] /redhat/EFI/BOOT/BOOTX64.EFI\"; option dhcp6.vendor-class 0 10 \"HTTPClient\"; } }",
"systemctl enable --now dhcpd6",
"IPv6_rpfilter=no",
"dnf install httpd",
"mkdir -p /var/www/html/redhat/",
"mkdir -p /var/www/html/redhat/iso/",
"mount -o loop,ro -t iso9660 path-to-RHEL-DVD.iso /var/www/html/redhat/iso",
"cp -r /var/www/html/redhat/iso/images /var/www/html/redhat/ cp -r /var/www/html/redhat/iso/EFI /var/www/html/redhat/",
"chmod 644 /var/www/html/redhat/EFI/BOOT/grub.cfg",
"set default=\"1\" function load_video { insmod efi_gop insmod efi_uga insmod video_bochs insmod video_cirrus insmod all_video } load_video set gfxpayload=keep insmod gzio insmod part_gpt insmod ext2 set timeout=60 # END /etc/grub.d/00_header # search --no-floppy --set=root -l ' RHEL-9-3-0-BaseOS-x86_64 ' # BEGIN /etc/grub.d/10_linux # menuentry 'Install Red Hat Enterprise Linux 9.3 ' --class fedora --class gnu-linux --class gnu --class os { linuxefi ../../images/pxeboot/vmlinuz inst.repo=http:// 192.168.124.2 /redhat/iso quiet initrdefi ../../images/pxeboot/initrd.img } menuentry 'Test this media & install Red Hat Enterprise Linux 9.3 ' --class fedora --class gnu-linux --class gnu --class os { linuxefi ../../images/pxeboot/vmlinuz inst.repo=http:// 192.168.124.2 /redhat/iso quiet initrdefi ../../images/pxeboot/initrd.img } submenu 'Troubleshooting -->' { menuentry 'Install Red Hat Enterprise Linux 9.3 in text mode' --class fedora --class gnu-linux --class gnu --class os { linuxefi ../../images/pxeboot/vmlinuz inst.repo=http:// 192.168.124.2 /redhat/iso inst.text quiet initrdefi ../../images/pxeboot/initrd.img } menuentry 'Rescue a Red Hat Enterprise Linux system' --class fedora --class gnu-linux --class gnu --class os { linuxefi ../../images/pxeboot/vmlinuz inst.repo=http:// 192.168.124.2 /redhat/iso inst.rescue quiet initrdefi ../../images/pxeboot/initrd.img } }",
"chmod 755 /var/www/html/redhat/EFI/BOOT/BOOTX64.EFI",
"firewall-cmd --zone public --add-port={80/tcp,67/udp,68/udp,546/udp,547/udp}",
"firewall-cmd --reload",
"systemctl enable --now httpd",
"chmod -cR u=rwX,g=rX,o=rX /var/www/html",
"restorecon -FvvR /var/www/html",
"dnf install dhcp-server",
"option architecture-type code 93 = unsigned integer 16; subnet 192.168.124.0 netmask 255.255.255.0 { option routers 192.168.124.1 ; option domain-name-servers 192.168.124.1 ; range 192.168.124.100 192.168.124.200 ; class \"pxeclients\" { match if substring (option vendor-class-identifier, 0, 9) = \"PXEClient\"; next-server 192.168.124.2 ; if option architecture-type = 00:07 { filename \"redhat/EFI/BOOT/BOOTX64.EFI\"; } else { filename \"pxelinux/pxelinux.0\"; } } class \"httpclients\" { match if substring (option vendor-class-identifier, 0, 10) = \"HTTPClient\"; option vendor-class-identifier \"HTTPClient\"; filename \"http:// 192.168.124.2 /redhat/EFI/BOOT/BOOTX64.EFI\"; } }",
"systemctl enable --now dhcpd",
"dnf install dhcp-server",
"option dhcp6.bootfile-url code 59 = string; option dhcp6.vendor-class code 16 = {integer 32, integer 16, string}; subnet6 fd33:eb1b:9b36::/64 { range6 fd33:eb1b:9b36::64 fd33:eb1b:9b36::c8 ; class \"PXEClient\" { match substring (option dhcp6.vendor-class, 6, 9); } subclass \"PXEClient\" \"PXEClient\" { option dhcp6.bootfile-url \"tftp:// [fd33:eb1b:9b36::2] /redhat/EFI/BOOT/BOOTX64.EFI\"; } class \"HTTPClient\" { match substring (option dhcp6.vendor-class, 6, 10); } subclass \"HTTPClient\" \"HTTPClient\" { option dhcp6.bootfile-url \"http:// [fd33:eb1b:9b36::2] /redhat/EFI/BOOT/BOOTX64.EFI\"; option dhcp6.vendor-class 0 10 \"HTTPClient\"; } }",
"systemctl enable --now dhcpd6",
"IPv6_rpfilter=no",
"dnf install tftp-server",
"firewall-cmd --add-service=tftp",
"mount -t iso9660 /path_to_image/name_of_image.iso /mount_point -o loop,ro",
"cp -pr /mount_point/AppStream/Packages/syslinux-tftpboot-version-architecture.rpm /my_local_directory",
"umount /mount_point",
"rpm2cpio syslinux-tftpboot-version-architecture.rpm | cpio -dimv",
"mkdir /var/lib/tftpboot/pxelinux",
"cp /my_local_directory/tftpboot/* /var/lib/tftpboot/pxelinux",
"mkdir /var/lib/tftpboot/pxelinux/pxelinux.cfg",
"default vesamenu.c32 prompt 1 timeout 600 display boot.msg label linux menu label ^Install system menu default kernel images/RHEL-9/vmlinuz append initrd=images/RHEL-9/initrd.img ip=dhcp inst.repo=http:// 192.168.124.2 /RHEL-9/x86_64/iso-contents-root/ label vesa menu label Install system with ^basic video driver kernel images/RHEL-9/vmlinuz append initrd=images/RHEL-9/initrd.img ip=dhcp inst.xdriver=vesa nomodeset inst.repo=http:// 192.168.124.2 /RHEL-9/x86_64/iso-contents-root/ label rescue menu label ^Rescue installed system kernel images/RHEL-9/vmlinuz append initrd=images/RHEL-9/initrd.img inst.rescue inst.repo=http:///192.168.124.2/RHEL-8/x86_64/iso-contents-root/ label local menu label Boot from ^local drive localboot 0xffff",
"mkdir -p /var/lib/tftpboot/pxelinux/images/RHEL-9/ cp /path_to_x86_64_images/pxeboot/{vmlinuz,initrd.img} /var/lib/tftpboot/pxelinux/images/RHEL-9/",
"systemctl enable --now tftp.socket",
"dnf install tftp-server",
"firewall-cmd --add-service=tftp",
"mount -t iso9660 /path_to_image/name_of_image.iso /mount_point -o loop,ro",
"mkdir /var/lib/tftpboot/redhat cp -r /mount_point/EFI /var/lib/tftpboot/redhat/ umount /mount_point",
"chmod -R 755 /var/lib/tftpboot/redhat/",
"set timeout=60 menuentry 'RHEL 9' { linux images/RHEL-9/vmlinuz ip=dhcp inst.repo=http:// 192.168.124.2 /RHEL-9/x86_64/iso-contents-root/ initrd images/RHEL-9/initrd.img }",
"mkdir -p /var/lib/tftpboot/images/RHEL-9/ cp /path_to_x86_64_images/pxeboot/{vmlinuz,initrd.img}/var/lib/tftpboot/images/RHEL-9/",
"systemctl enable --now tftp.socket",
"dnf install tftp-server dhcp-server",
"firewall-cmd --add-service=tftp",
"grub2-mknetdir --net-directory=/var/lib/tftpboot Netboot directory for powerpc-ieee1275 created. Configure your DHCP server to point to /boot/grub2/powerpc-ieee1275/core.elf",
"dnf install grub2-ppc64le-modules",
"set default=0 set timeout=5 echo -e \"\\nWelcome to the Red Hat Enterprise Linux 9 installer!\\n\\n\" menuentry 'Red Hat Enterprise Linux 9' { linux grub2-ppc64/vmlinuz ro ip=dhcp inst.repo=http:// 192.168.124.2 /RHEL-9/x86_64/iso-contents-root/ initrd grub2-ppc64/initrd.img }",
"mount -t iso9660 /path_to_image/name_of_iso/ /mount_point -o loop,ro",
"cp /mount_point/ppc/ppc64/{initrd.img,vmlinuz} /var/lib/tftpboot/grub2-ppc64/",
"subnet 192.168.0.1 netmask 255.255.255.0 { allow bootp; option routers 192.168.0.5; group { #BOOTP POWER clients filename \"boot/grub2/powerpc-ieee1275/core.elf\"; host client1 { hardware ethernet 01:23:45:67:89:ab; fixed-address 192.168.0.112; } } }",
"systemctl enable --now dhcpd",
"systemctl enable --now tftp.socket",
"mokutil --import /usr/share/doc/kernel-keys/USD(uname -r)/kernel-signing-ca.cer",
"mokutil --reset",
"rd.znet=qeth,0.0.0600,0.0.0601,0.0.0602,layer2=1,portno= <number>",
"rd.dasd=0.0.0200 rd.dasd=0.0.0202(ro),0.0.0203(ro:failfast),0.0.0205-0.0.0207",
"rd.zfcp=0.0.4000,0x5005076300C213e9,0x5022000000000000 rd.zfcp=0.0.4000",
"ro ramdisk_size=40000 cio_ignore=all,!condev inst.repo=http://example.com/path/to/repository rd.znet=qeth,0.0.0600,0.0.0601,0.0.0602,layer2=1,portno=0,portname=foo ip=192.168.17.115::192.168.17.254:24:foobar.systemz.example.com:enc600:none nameserver=192.168.17.1 rd.dasd=0.0.0200 rd.dasd=0.0.0202 rd.zfcp=0.0.4000,0x5005076300c213e9,0x5022000000000000 rd.zfcp=0.0.5000,0x5005076300dab3e9,0x5022000000000000 inst.ks=http://example.com/path/to/kickstart",
"images/kernel.img 0x00000000 images/initrd.img 0x02000000 images/genericdvd.prm 0x00010480 images/initrd.addrsize 0x00010408",
"qeth: SUBCHANNELS=\" read_device_bus_id , write_device_bus_id , data_device_bus_id \" lcs or ctc: SUBCHANNELS=\" read_device_bus_id , write_device_bus_id \"",
"SUBCHANNELS=\"0.0.f5f0,0.0.f5f1,0.0.f5f2\"",
"DNS=\"10.1.2.3:10.3.2.1\"",
"SEARCHDNS=\"subdomain.domain:domain\"",
"DASD=\"eb1c,0.0.a000-0.0.a003,eb10-eb14(diag),0.0.ab1c(ro:diag)\"",
"FCP_ n =\" device_bus_ID [ WWPN FCP_LUN ]\"",
"FCP_1=\"0.0.fc00 0x50050763050b073d 0x4020400100000000\" FCP_2=\"0.0.4000\"",
"inst.stage2=http://hostname/path_to_install_tree/ inst.stage2=http://hostname/path_to_install_tree/ inst.stage2=http://hostname/path_to_install_tree/",
"ro ramdisk_size=40000 cio_ignore=all,!condev CMSDASD=\"191\" CMSCONFFILE=\"redhat.conf\" inst.vnc inst.repo=http://example.com/path/to/dvd-contents",
"NETTYPE=\"qeth\" SUBCHANNELS=\"0.0.0600,0.0.0601,0.0.0602\" PORTNAME=\"FOOBAR\" PORTNO=\"0\" LAYER2=\"1\" MACADDR=\"02:00:be:3a:01:f3\" HOSTNAME=\"foobar.systemz.example.com\" IPADDR=\"192.168.17.115\" NETMASK=\"255.255.255.0\" GATEWAY=\"192.168.17.254\" DNS=\"192.168.17.1\" SEARCHDNS=\"systemz.example.com:example.com\" DASD=\"200-203\"",
"logon user here",
"cp ipl cms",
"query disk",
"cp query virtual storage",
"cp query virtual osa",
"cp query virtual dasd",
"cp query virtual fcp",
"virt-install --name=<guest_name> --disk size=<disksize_in_GB> --memory=<memory_size_in_MB> --cdrom <filepath_to_iso> --graphics vnc",
"cp link tcpmaint 592 592 acc 592 fm",
"ftp <host> (secure",
"cd / location/of/install-tree /images/ ascii get generic.prm (repl get redhat.exec (repl locsite fix 80 binary get kernel.img (repl get initrd.img (repl quit",
"VMUSER FILELIST A0 V 169 Trunc=169 Size=6 Line=1 Col=1 Alt=0 Cmd Filename Filetype Fm Format Lrecl Records Blocks Date Time REDHAT EXEC B1 V 22 1 1 4/15/10 9:30:40 GENERIC PRM B1 V 44 1 1 4/15/10 9:30:32 INITRD IMG B1 F 80 118545 2316 4/15/10 9:30:25 KERNEL IMG B1 F 80 74541 912 4/15/10 9:30:17",
"redhat",
"cp ipl DASD_device_number loadparm boot_entry_number",
"cp ipl eb1c loadparm 0",
"cp set loaddev portname WWPN lun LUN bootprog boot_entry_number",
"cp set loaddev portname 50050763 050b073d lun 40204011 00000000 bootprog 0",
"query loaddev",
"cp ipl FCP_device",
"cp ipl fc00",
">vmlinuz initrd=initrd.img inst.stage2=hd:LABEL=RHEL-9-5-0-BaseOS-x86_64 rd.live.check quiet fips=1",
"linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-9-4-0-BaseOS-x86_64 rd.live. check quiet fips=1",
"modprobe.blacklist=ahci",
"vncviewer -listen PORT",
"TigerVNC Viewer 64-bit v1.8.0 Built on: 2017-10-12 09:20 Copyright (C) 1999-2017 TigerVNC Team and many others (see README.txt) See http://www.tigervnc.org for information about TigerVNC. Thu Jun 27 11:30:57 2019 main: Listening on port 5500",
"oscap xccdf eval --profile ospp --report eval_postinstall_report.html /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml",
"subscription-manager register --activationkey= <activation_key_name> --org= <organization_ID>",
"The system has been registered with id: 62edc0f8-855b-4184-b1b8-72a9dc793b96",
"subscription-manager syspurpose role --set \"VALUE\"",
"subscription-manager syspurpose role --set \"Red Hat Enterprise Linux Server\"",
"subscription-manager syspurpose role --list",
"subscription-manager syspurpose role --unset",
"subscription-manager syspurpose service-level --set \"VALUE\"",
"subscription-manager syspurpose service-level --set \"Standard\"",
"subscription-manager syspurpose service-level --list",
"subscription-manager syspurpose service-level --unset",
"subscription-manager syspurpose usage --set \"VALUE\"",
"subscription-manager syspurpose usage --set \"Production\"",
"subscription-manager syspurpose usage --list",
"subscription-manager syspurpose usage --unset",
"subscription-manager syspurpose --show",
"man subscription-manager",
"subscription-manager status +-------------------------------------------+ System Status Details +-------------------------------------------+ Overall Status: Current System Purpose Status: Matched",
"subscription-manager status +-------------------------------------------+ System Status Details +-------------------------------------------+ Overall Status: Disabled Content Access Mode is set to Simple Content Access. This host has access to content, regardless of subscription status. System Purpose Status: Disabled",
"subscription-manager unregister",
"CP ATTACH EB1C TO *",
"CP LINK RHEL7X 4B2E 4B2E MR DASD 4B2E LINKED R/W",
"cio_ignore -r device_number",
"cio_ignore -r 4b2e",
"chccwdev -e device_number",
"chccwdev -e 4b2e",
"cd /root # dasdfmt -b 4096 -d cdl -p /dev/disk/by-path/ccw-0.0.4b2e Drive Geometry: 10017 Cylinders * 15 Heads = 150255 Tracks I am going to format the device /dev/disk/by-path/ccw-0.0.4b2e in the following way: Device number of device : 0x4b2e Labelling device : yes Disk label : VOL1 Disk identifier : 0X4B2E Extent start (trk no) : 0 Extent end (trk no) : 150254 Compatible Disk Layout : yes Blocksize : 4096 --->> ATTENTION! <<--- All data of that device will be lost. Type \"yes\" to continue, no will leave the disk untouched: yes cyl 97 of 3338 |#----------------------------------------------| 2%",
"Rereading the partition table Exiting",
"fdasd -a /dev/disk/by-path/ccw-0.0.4b2e reading volume label ..: VOL1 reading vtoc ..........: ok auto-creating one partition for the whole disk writing volume label writing VTOC rereading partition table",
"machine_id=USD(cat /etc/machine-id) kernel_version=USD(uname -r) ls /boot/loader/entries/USDmachine_id-USDkernel_version.conf",
"title Red Hat Enterprise Linux (4.18.0-80.el8.s390x) 8.0 (Ootpa) version 4.18.0-80.el8.s390x linux /boot/vmlinuz-4.18.0-80.el8.s390x initrd /boot/initramfs-4.18.0-80.el8.s390x.img options root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.dasd=0.0.0200 rd.dasd=0.0.0207 rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0 id rhel-20181027190514-4.18.0-80.el8.s390x grub_users USDgrub_users grub_arg --unrestricted grub_class kernel",
"title Red Hat Enterprise Linux (4.18.0-80.el8.s390x) 8.0 (Ootpa) version 4.18.0-80.el8.s390x linux /boot/vmlinuz-4.18.0-80.el8.s390x initrd /boot/initramfs-4.18.0-80.el8.s390x.img options root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.dasd=0.0.0200 rd.dasd=0.0.0207 rd.dasd=0.0.202b rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0 id rhel-20181027190514-4.18.0-80.el8.s390x grub_users USDgrub_users grub_arg --unrestricted grub_class kernel",
"zipl -V Using config file '/etc/zipl.conf' Using BLS config file '/boot/loader/entries/4ab74e52867b4f998e73e06cf23fd761-4.18.0-80.el8.s390x.conf' Target device information Device..........................: 5e:00 Partition.......................: 5e:01 Device name.....................: dasda Device driver name..............: dasd DASD device number..............: 0201 Type............................: disk partition Disk layout.....................: ECKD/compatible disk layout Geometry - heads................: 15 Geometry - sectors..............: 12 Geometry - cylinders............: 13356 Geometry - start................: 24 File system block size..........: 4096 Physical block size.............: 4096 Device size in physical blocks..: 262152 Building bootmap in '/boot' Building menu 'zipl-automatic-menu' Adding #1: IPL section '4.18.0-80.el8.s390x' (default) initial ramdisk...: /boot/initramfs-4.18.0-80.el8.s390x.img kernel image......: /boot/vmlinuz-4.18.0-80.el8.s390x kernel parmline...: 'root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.dasd=0.0.0200 rd.dasd=0.0.0207 rd.dasd=0.0.202b rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0' component address: kernel image....: 0x00010000-0x0049afff parmline........: 0x0049b000-0x0049bfff initial ramdisk.: 0x004a0000-0x01a26fff internal loader.: 0x0000a000-0x0000cfff Preparing boot menu Interactive prompt......: enabled Menu timeout............: 5 seconds Default configuration...: '4.18.0-80.el8.s390x' Preparing boot device: dasda (0201). Syncing disks Done.",
"0.0.0207 0.0.0200 use_diag=1 readonly=1",
"cio_ignore -r device_number",
"cio_ignore -r 021a",
"echo add > /sys/bus/ccw/devices/ dasd-bus-ID /uevent",
"echo add > /sys/bus/ccw/devices/0.0.021a/uevent",
"machine_id=USD(cat /etc/machine-id) kernel_version=USD(uname -r) ls /boot/loader/entries/USDmachine_id-USDkernel_version.conf",
"title Red Hat Enterprise Linux (5.14.0-55.el9.s390x) 9.0 (Plow) version 5.14.0-55.el9.s390x linux /boot/vmlinuz-5.14.0-55.el9.s390x initrd /boot/initramfs-5.14.0-55.el9.s390x.img options root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a000000000 rd.zfcp=0.0.fcd0,0x5105074308c2aee9,0x401040a000000000 rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0 id rhel-20181027190514-5.14.0-55.el9.s390x grub_users USDgrub_users grub_arg --unrestricted grub_class kernel",
"title Red Hat Enterprise Linux (5.14.0-55.el9.s390x) 9.0 (Plow) version 5.14.0-55.el9.s390x linux /boot/vmlinuz-5.14.0-55.el9.s390x initrd /boot/initramfs-5.14.0-55.el9.s390x.img options root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a000000000 rd.zfcp=0.0.fcd0,0x5105074308c2aee9,0x401040a000000000 rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a300000000 rd.zfcp=0.0.fcd0,0x5105074308c2aee9,0x401040a300000000 rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0 id rhel-20181027190514-5.14.0-55.el9.s390x grub_users USDgrub_users grub_arg --unrestricted grub_class kernel",
"zipl -V Using config file '/etc/zipl.conf' Using BLS config file '/boot/loader/entries/4ab74e52867b4f998e73e06cf23fd761-5.14.0-55.el9.s390x.conf' Run /lib/s390-tools/zipl_helper.device-mapper /boot Target device information Device..........................: fd:00 Partition.......................: fd:01 Device name.....................: dm-0 Device driver name..............: device-mapper Type............................: disk partition Disk layout.....................: SCSI disk layout Geometry - start................: 2048 File system block size..........: 4096 Physical block size.............: 512 Device size in physical blocks..: 10074112 Building bootmap in '/boot/' Building menu 'zipl-automatic-menu' Adding #1: IPL section '5.14.0-55.el9.s390x' (default) kernel image......: /boot/vmlinuz-5.14.0-55.el9.s390x kernel parmline...: 'root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a000000000 rd.zfcp=0.0.fcd0,0x5105074308c2aee9,0x401040a000000000 rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a300000000 rd.zfcp=0.0.fcd0,0x5105074308c2aee9,0x401040a300000000 rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0' initial ramdisk...: /boot/initramfs-5.14.0-55.el9.s390x.img component address: kernel image....: 0x00010000-0x007a21ff parmline........: 0x00001000-0x000011ff initial ramdisk.: 0x02000000-0x028f63ff internal loader.: 0x0000a000-0x0000a3ff Preparing boot device: dm-0. Detected SCSI PCBIOS disk layout. Writing SCSI master boot record. Syncing disks Done.",
"0.0.fc00 0x5105074308c212e9 0x401040a000000000 0.0.fc00 0x5105074308c212e9 0x401040a100000000 0.0.fc00 0x5105074308c212e9 0x401040a300000000 0.0.fcd0 0x5105074308c2aee9 0x401040a000000000 0.0.fcd0 0x5105074308c2aee9 0x401040a100000000 0.0.fcd0 0x5105074308c2aee9 0x401040a300000000 0.0.4000 0.0.5000",
"zfcp_cio_free",
"zfcpconf.sh",
"lsmod | grep qeth qeth_l3 69632 0 qeth_l2 49152 1 qeth 131072 2 qeth_l3,qeth_l2 qdio 65536 3 qeth,qeth_l3,qeth_l2 ccwgroup 20480 1 qeth",
"modprobe qeth",
"cio_ignore -r read_device_bus_id,write_device_bus_id,data_device_bus_id",
"cio_ignore -r 0.0.f500,0.0.f501,0.0.f502",
"znetconf -u Scanning for network devices Device IDs Type Card Type CHPID Drv. ------------------------------------------------------------ 0.0.f500,0.0.f501,0.0.f502 1731/01 OSA (QDIO) 00 qeth 0.0.f503,0.0.f504,0.0.f505 1731/01 OSA (QDIO) 01 qeth 0.0.0400,0.0.0401,0.0.0402 1731/05 HiperSockets 02 qeth",
"znetconf -a f500 Scanning for network devices Successfully configured device 0.0.f500 (encf500)",
"znetconf -a f500 -o portname=myname Scanning for network devices Successfully configured device 0.0.f500 (encf500)",
"echo read_device_bus_id,write_device_bus_id,data_device_bus_id > /sys/bus/ccwgroup/drivers/qeth/group",
"echo 0.0.f500,0.0.f501,0.0.f502 > /sys/bus/ccwgroup/drivers/qeth/group",
"ls /sys/bus/ccwgroup/drivers/qeth/0.0.f500",
"echo 1 > /sys/bus/ccwgroup/drivers/qeth/0.0.f500/online",
"cat /sys/bus/ccwgroup/drivers/qeth/0.0.f500/online 1",
"cat /sys/bus/ccwgroup/drivers/qeth/0.0.f500/if_name encf500",
"lsqeth encf500 Device name : encf500 ------------------------------------------------- card_type : OSD_1000 cdev0 : 0.0.f500 cdev1 : 0.0.f501 cdev2 : 0.0.f502 chpid : 76 online : 1 portname : OSAPORT portno : 0 state : UP (LAN ONLINE) priority_queueing : always queue 0 buffer_count : 16 layer2 : 1 isolation : none",
"cd /etc/NetworkManager/system-connections/ cp enc9a0.nmconnection enc600.nmconnection",
"lsqeth -p devices CHPID interface cardtype port chksum prio-q'ing rtr4 rtr6 lay'2 cnt -------------------------- ----- ---------------- -------------- ---- ------ ---------- ---- ---- ----- ----- 0.0.09a0/0.0.09a1/0.0.09a2 x00 enc9a0 Virt.NIC QDIO 0 sw always_q_2 n/a n/a 1 64 0.0.0600/0.0.0601/0.0.0602 x00 enc600 Virt.NIC QDIO 0 sw always_q_2 n/a n/a 1 64",
"[connection] type=ethernet interface-name=enc600 [ipv4] address1=10.12.20.136/24,10.12.20.1 dns=10.12.20.53; method=manual [ethernet] mac-address=00:53:00:8f:fa:66",
"chown root:root /etc/NetworkManager/system-connections/enc600.nmconnection",
"nmcli connection reload",
"nmcli connection show enc600",
"cio_ignore -r read_device_bus_id,write_device_bus_id,data_device_bus_id",
"cio_ignore -r 0.0.0600,0.0.0601,0.0.0602",
"echo add > /sys/bus/ccw/devices/read-channel/uevent",
"echo add > /sys/bus/ccw/devices/0.0.0600/uevent",
"lsqeth",
"[ipv4] address1=10.12.20.136/24,10.12.20.1 [ipv6] address1=2001:db8:1::1,2001:db8:1::fffe",
"nmcli connection up enc600",
"ip addr show enc600 3: enc600: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 3c:97:0e:51:38:17 brd ff:ff:ff:ff:ff:ff 10.12.20.136/24 brd 10.12.20.1 scope global dynamic enc600 valid_lft 81487sec preferred_lft 81487sec inet6 1574:12:5:1185:3e97:eff:fe51:3817/64 scope global noprefixroute dynamic valid_lft 2591994sec preferred_lft 604794sec inet6 fe45::a455:eff:d078:3847/64 scope link valid_lft forever preferred_lft forever",
"ip route default via 10.12.20.136 dev enc600 proto dhcp src",
"ping -c 1 10.12.20.136 PING 10.12.20.136 (10.12.20.136) 56(84) bytes of data. 64 bytes from 10.12.20.136: icmp_seq=0 ttl=63 time=8.07 ms",
"machine_id=USD(cat /etc/machine-id) kernel_version=USD(uname -r) ls /boot/loader/entries/USDmachine_id-USDkernel_version.conf",
"root=10.16.105.196:/nfs/nfs_root cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0,portname=OSAPORT ip=10.16.105.197:10.16.105.196:10.16.111.254:255.255.248.0:nfs‐server.subdomain.domain:enc9a0:none rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us",
"vmlinuz ... inst.debug",
"cd /tmp/pre-anaconda-logs/",
"dmesg",
"[ 170.171135] sd 5:0:0:0: [sdb] Attached SCSI removable disk",
"mkdir usb",
"mount /dev/sdb1 /mnt/usb",
"cd /mnt/usb",
"ls",
"cp /tmp/*log /mnt/usb",
"umount /mnt/usb",
"cd /tmp",
"scp *log user@address:path",
"scp *log [email protected]:/home/john/logs/",
"The authenticity of host '192.168.0.122 (192.168.0.122)' can't be established. ECDSA key fingerprint is a4:60:76:eb:b2:d0:aa:23:af:3d:59:5c:de:bb:c4:42. Are you sure you want to continue connecting (yes/no)?",
"curl --output directory-path/filename.iso 'new_copied_link_location' --continue-at -",
"sha256sum rhel-x.x-x86_64-dvd.iso `85a...46c rhel-x.x-x86_64-dvd.iso`",
"curl --output _rhel-x.x-x86_64-dvd.iso 'https://access.cdn.redhat.com//content/origin/files/sha256/85/85a...46c/rhel-x.x-x86_64-dvd.iso?_auth =141...963' --continue-at -",
"grubby --default-kernel /boot/vmlinuz-4.18.0-94.el8.x86_64",
"grubby --remove-args=\"rhgb\" --update-kernel /boot/vmlinuz-4.18.0-94.el8.x86_64",
"df -h",
"Filesystem Size Used Avail Use% Mounted on devtmpfs 396M 0 396M 0% /dev tmpfs 411M 0 411M 0% /dev/shm tmpfs 411M 6.7M 405M 2% /run tmpfs 411M 0 411M 0% /sys/fs/cgroup /dev/mapper/rhel-root 17G 4.1G 13G 25% / /dev/sda1 1014M 173M 842M 17% /boot tmpfs 83M 20K 83M 1% /run/user/42 tmpfs 83M 84K 83M 1% /run/user/1000 /dev/dm-4 90G 90G 0 100% /home",
"free -m",
"mem= xx M",
"free -m",
"grubby --update-kernel=ALL --args=\"mem= xx M\"",
"Enable=true",
"systemctl restart gdm.service",
"X :1 -query address",
"Xnest :1 -query address",
"inst.rescue inst.dd=driver_name",
"inst.rescue modprobe.blacklist=driver_name",
"The rescue environment will now attempt to find your Linux installation and mount it under the directory: /mnt/sysroot/. You can then make any changes required to your system. Choose 1 to proceed with this step. You can choose to mount your file systems read-only instead of read-write by choosing 2 . If for some reason this process does not work choose 3 to skip directly to a shell. 1) Continue 2) Read-only mount 3) Skip to shell 4) Quit (Reboot)",
"sh-4.2#",
"sh-4.2# chroot /mnt/sysroot",
"sh-4.2# mount -t xfs /dev/mapper/VolGroup00-LogVol02 /directory",
"sh-4.2# fdisk -l",
"sh-4.2# chroot /mnt/sysroot/",
"sh-4.2# sosreport",
"bash-4.2# ip addr add 10.13.153.64/23 dev eth0",
"sh-4.2# exit",
"sh-4.2# cp /mnt/sysroot/var/tmp/sosreport new_location",
"sh-4.2# scp /mnt/sysroot/var/tmp/sosreport username@hostname:sosreport",
"sh-4.2# chroot /mnt/sysroot/",
"sh-4.2# /sbin/grub2-install install_device",
"sh-4.2# chroot /mnt/sysroot/",
"sh-4.2# dnf install /root/drivers/xorg-x11-drv-wacom-0.23.0-6.el7.x86_64.rpm",
"sh-4.2# exit",
"sh-4.2# chroot /mnt/sysroot/",
"sh-4.2# dnf remove xorg-x11-drv-wacom",
"sh-4.2# exit",
"ip=192.168.1.15 netmask=255.255.255.0 gateway=192.168.1.254 nameserver=192.168.1.250 hostname=myhost1",
"ip=192.168.1.15::192.168.1.254:255.255.255.0:myhost1::none: nameserver=192.168.1.250",
"inst.xtimeout= N",
"[ ...] rootfs image is not initramfs",
"sha256sum dvd/images/pxeboot/initrd.img fdb1a70321c06e25a1ed6bf3d8779371b768d5972078eb72b2c78c925067b5d8 dvd/images/pxeboot/initrd.img",
"grep sha256 dvd/.treeinfo images/efiboot.img = sha256: d357d5063b96226d643c41c9025529554a422acb43a4394e4ebcaa779cc7a917 images/install.img = sha256: 8c0323572f7fc04e34dd81c97d008a2ddfc2cfc525aef8c31459e21bf3397514 images/pxeboot/initrd.img = sha256: fdb1a70321c06e25a1ed6bf3d8779371b768d5972078eb72b2c78c925067b5d8 images/pxeboot/vmlinuz = sha256: b9510ea4212220e85351cbb7f2ebc2b1b0804a6d40ccb93307c165e16d1095db",
"[ ...] No filesystem could mount root, tried: [ ...] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(1,0) [ ...] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.14.0-55.el9.s390x #1 [ ...] [ ...] Call Trace: [ ...] ([<...>] show_trace+0x.../0x...) [ ...] [<...>] show_stack+0x.../0x [ ...] [<...>] panic+0x.../0x [ ...] [<...>] mount_block_root+0x.../0x [ ...] [<...>] prepare_namespace+0x.../0x [ ...] [<...>] kernel_init_freeable+0x.../0x [ ...] [<...>] kernel_init+0x.../0x [ ...] [<...>] kernel_thread_starter+0x.../0x [ ...] [<...>] kernel_thread_starter+0x.../0x...",
"inst.stage2=https://hostname/path_to_install_image/ inst.noverifyssl",
"inst.repo=https://hostname/path_to_install_repository/ inst.noverifyssl",
"inst.stage2.all inst.stage2=http://hostname1/path_to_install_tree/ inst.stage2=http://hostname2/path_to_install_tree/ inst.stage2=http://hostname3/path_to_install_tree/",
"[PROTOCOL://][USERNAME[:PASSWORD]@]HOST[:PORT]",
"inst.nosave=Input_ks,logs",
"ifname=eth0:01:23:45:67:89:ab",
"vlan=vlan5:enp0s1",
"bond=bond0:enp0s1,enp0s2:mode=active-backup,tx_queues=32,downdelay=5000",
"team=team0:enp0s1,enp0s2",
"bridge=bridge0:enp0s1,enp0s2",
"modprobe.blacklist=ahci,firewire_ohci",
"modprobe.blacklist=virtio_blk"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html-single/interactively_installing_rhel_over_the_network/index |
4.316. systemtap | 4.316. systemtap 4.316.1. RHSA-2012:0376 - Moderate: systemtap security update Updated systemtap packages that fix one security issue are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. SystemTap is an instrumentation system for systems running the Linux kernel. The system allows developers to write scripts to collect data on the operation of the system. Security Fix CVE-2012-0875 An invalid pointer read flaw was found in the way SystemTap handled malformed debugging information in DWARF format. When SystemTap unprivileged mode was enabled, an unprivileged user in the stapusr group could use this flaw to crash the system or, potentially, read arbitrary kernel memory. Additionally, a privileged user (root, or a member of the stapdev group) could trigger this flaw when tricked into instrumenting a specially-crafted ELF binary, even when unprivileged mode was not enabled. SystemTap users should upgrade to these updated packages, which contain a backported patch to correct this issue. 4.316.2. RHBA-2011:1517 - systemtap bug fix and enhancement update Updated systemtap packages that fix several bugs and adds various enhancements are now available for Red Hat Enterprise Linux 6. SystemTap is a tracing and probing tool that allows users to study and monitor the activities of the operating system (particularly, the kernel). It provides information similar to the output of tools like netstat, ps, top, and iostat; however, SystemTap provides more filtering and analysis options. BZ# 683483 The systemtap package has been upgraded to upstream version 1.6, which provides a number of bug fixes and enhancements: SystemTap now handles kernel modules with "-" in the name such as "i2c-core" properly. The process().mark() function now supports the USDUSDparms log parameter for reading probe parameters. The operation of the compile-server and client was improved and simplified: The compile server can cache the script build results. The compile server and client communicate exchange version info to adjust the communication protocol and use the newest version of the compile server. The following deprecated tools were removed: stap-client, stap-authorize-server-cert, stap-authorize-signing-cert, stap-find-or-start-server, and stap-find-servers. For remote execution, the "--remote USER@HOST" functionality can now be specified multiple times, and will automatically build the script for distinct kernel and architecture configurations and run it on all named machines at once. The staprun program allows to run multiple instances of the same script at the same time. Bug Fixes BZ# 711757 When the system ran out of memory in a module, SystemTap failed to remove the module directory created by the debugfs debugger. Consequently, SystemTap was unable to load the module until after a system reboot and a kernel panic could occur. With this update, SystemTap removes the module directory properly. BZ# 708255 The rate of error or warning messages, sent from the probe module to user-space, caused various buffer overflows under certain circumstances. As a result, the messages could be lost and caused a kernel panic. 
With this update, these messages are also delivered if there is an excessive amount of warning or error messages. BZ# 732346 SystemTap uses the make command to search for available tracepoints ('kernel.trace("*")'). Previously, the make operation printed out a number of misleading error messages on the first tracepoint search. Because search results are cached, the error messages were produced only on the first search. With this update, the tracepoint search uses the "make -i" (ignore errors) operation, the error messages from the make operations are suppressed, and no misleading output is printed when performing tracepoint searches. BZ# 737095 Prior to this update, starting a SystemTap server could fail. This occurred because the getlogin() function returned a NULL pointer when initializing an std::string object. With this update, the underlying code has been modified and the problem no longer occurs. BZ# 692970 The sdt.c tests in the testsuite were attempting to include the unavailable sdt_types.h header file, and the compilation of the systemtap-testsuite package failed. With this update, the sdt.c tests no longer need the sdt_types.h file and systemtap-testsuite is compiled as expected. BZ# 718151 The unprivileged probing of user-space applications was not documented properly. This update adds information to the manual pages on unprivileged probing of user-space applications. BZ# 691763 On some architectures, SystemTap could not find certain functions to probe or variables to fetch in highly optimized code. As a consequence, the cxxclass.exp test script failed on PowerPC. This update adds the "-DSTAP_SDT_ARG_CONSTRAINT=nr" option to the cxxclass.exp test script. With this option the arguments are accessible to SystemTap and the test works as expected on PowerPC. BZ# 692445 The stap_run.exp test script incorrectly matched the end of the file and the tests using the script were reported as failed on IBM System z although the tests succeeded. With this update, the end-of-line matching has been fixed and the tests on IBM System z pass as expected. BZ# 723536 The stap command with the "--remote" parameter failed to properly copy the systemtap_session object on Intel x86, PowerPC and IBM System z architectures. Consequently, instrumentation failed to be built and the utility terminated with a segmentation fault. Now, the underlying code copies the systemtap_session object properly and the stap command builds instrumentation as expected. BZ# 723541 The stap command with the "--remote" parameter did not use the proper canonical names on Intel x86, PowerPC and IBM System z architectures. Consequently, instrumentation failed to build. With this update, the underlying session code has been modified to properly copy kernel/arch information and the stap command builds instrumentation correctly in such scenarios. BZ# 738242 On startup, SystemTap needs to allocate memory to individual CPUs for the data transport layer. Previously, SystemTap used the number of all potential CPUs instead of online (schedulable) CPUs to estimate the required memory. Consequently, SystemTap could incorrectly indicate that there was not enough memory available on startup and exit. With this update, the underlying code has been modified and SystemTap now uses the number of online CPUs to estimate the memory needed for the data transport layer. BZ# 685064 When probing a user-space module, SystemTap needs the uprobe.ko kernel module. If the module does not exist, it is built. 
Previously, this could lead to a race condition if multiple scripts probing user-space applications attempted to build the missing uprobes.ko kernel module. The parallel builds of uprobes.ko could interfere with each other and cause the scripts to fail. With this update, the stap command caches the resulting uprobe.ko kernel module to avoid interfering with SystemTap instrumentation instances. Multiple SystemTap scripts probing user-space applications now no longer interfere with each other regardless of whether a uprobe.ko kernel module exists. BZ# 684227 The futexes.stp example script in the SystemTap Beginners Guide did not mask out the FUTEX_PRIVATE_FLAG and FUTEX_CLOCK_REALTIME flags in newer versions of the Linux Kernel. Due to this issue, the script could not return any results. The script has been modified and returns the correct results. BZ# 671235 The cmdline_string() function concatenates command line arguments. However, if an empty argument ("") was used, the function did not print any arguments following the NULL argument. The handling of an empty argument was added to the definition of the cmdline_string() function and the function now prints all arguments from the command line including the arguments following the empty argument. BZ# 607161 There were differences between the tracepoints (kernel.trace("mm_*")) in Red Hat Enterprise Linux and upstream kernels. Due to the differences, the building and running of the mmanonpage and mmfilepage tests failed. This update adds checks which verify the tracepoints' location before running the tests and the testsuite only uses the tracepoints on kernels that actually have the tracepoints. BZ# 639344 The syscall wrapper functions used on IBM System z converted arguments from pointers to long integers. The usymbols test assumed the arguments were pointers and the test failed. This update modifies the usymbols test in the SystemTap testsuite package and the utility handles the arguments properly. BZ# 730884 The kernel header files could contain mutually conflicting definitions for the tracepoints of the perf tool. However, SystemTap does not detect kernel headers with conflicting definitions. As the SystemTap translator built the module in one compile unit, the translator tool did not detect the tracepoints. The SystemTap translator now builds instrumentation so that the kernel header files with the tracepoints are compiled in separate units and the tracepoints for jbd and ext3 subsystems are now visible to SystemTap. BZ# 733181 The sdt.exp script ran some tests with -DEXPERIMENTAL_KPROBE_SDT, an obsolete mechanism. On some architectures the tests failed. The sdt.exp script now marks the problematic -DEXPERIMENTAL_KPROBE_SDT tests that fail as XFAIL. Also, the results of the tests using -DEXPERIMENTAL_KPROBE_SDT that fail are no longer included in the FAIL results. BZ# 733182 The sdt.exp script ran some tests with STAP_SDT_V2, an obsolete mechanism. On some architectures the tests failed because of ambiguities between numeric constants and register numbers. The sdt.exp script now marks the problematic STAP_SDT_V2 tests that fail as XFAIL. The results of the tests using STAP_SDT_V2 that fail are no longer included in the FAIL results. BZ# 730167 Probing a user-space application requires the uprobes kernel module. Previously, the "--remote" option used the path to the uprobes kernel module on the client instead of the proper path on a remote machine. 
If a uprobes module was not yet loaded on the remote machine, the instrumentation did not start and the following error message was printed: SystemTap now loads the correct uprobes module and user-space application probing works as expected with the "--remote" option even if the uprobes kernel module is not currently loaded on the remote machine. BZ# 731288 The sdt_misc.exp test did not take into account that some code in sdt_types.c was compiled conditionally, which meant that the number of available probe points could change. As a consequence, the sdt_misc.exp test failed on 32-bit and 64-bit AMD architectures because the number of probe points available did not match the expected value. With this update, the sdt_misc.exp test now takes into account the conditionally compiled code in sdt_types.c when deciding the number of probe points and the test passes as expected. Enhancements BZ# 600400 In verbose mode, the stap-client utility now shows progress information about the communication with the server. Users of systemtap are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | [
"ERROR: Unable to canonicalize path [...]: No such file or directory"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/systemtap |
14.2. ID Range Assignments During Installation | 14.2. ID Range Assignments During Installation During server installation, the ipa-server-install command by default automatically assigns a random current ID range to the installed server. The setup script randomly selects a range of 200,000 IDs from a total of 10,000 possible ranges. Selecting a random range in this way significantly reduces the probability of conflicting IDs in case you decide to merge two separate IdM domains in the future. However, you can define a current ID range manually during server installation by using the following two options with ipa-server-install : --idstart gives the starting value for UID and GID numbers; by default, the value is selected at random, --idmax gives the maximum UID and GID number; by default, the value is the --idstart starting value plus 199,999. If you have a single IdM server installed, a new user or group entry receives a random ID from the whole range. When you install a new replica and the replica requests its own ID range, the initial ID range for the server splits and is distributed between the server and replica: the replica receives half of the remaining ID range that is available on the initial master. The server and replica then use their respective portions of the original ID range for new entries. Also, if less than 100 IDs from the ID range that was assigned to a replica remain, meaning the replica is close to depleting its allocated ID range, the replica contacts the other available servers with a request for a new ID range. A server receives an ID range the first time the DNA plug-in is used; until then, the server has no ID range defined. For example, when you create a replica from a master server, the replica does not receive an ID range immediately. The replica requests an ID range from the initial master only when the first ID is about to be assigned on the replica. Note If the initial master stops functioning before the replica requests an ID range from it, the replica is unable to contact the master with a request for the ID range. An attempt to add a new user on the replica fails. In such situations, you can find out what ID range is assigned to the disabled master and assign an ID range to the replica manually, which is described in Section 14.5, "Manual ID Range Extension and Assigning a New ID Range" . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/id-ranges-at-install |
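As an illustration of the two installer options described above, a server could be installed with an explicit current ID range as follows. This is a minimal sketch: the numeric values are arbitrary placeholders, all other ipa-server-install options are omitted, and the ipa idrange-find step assumes you have obtained a Kerberos ticket as an administrator.
# Install with an explicit starting value and maximum for UID/GID numbers
ipa-server-install --idstart=300000 --idmax=499999
# After installation, inspect the ID ranges known to the domain
kinit admin
ipa idrange-find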
Chapter 2. Planning a Distributed Compute Node (DCN) deployment | When you plan your DCN architecture, check that the technologies that you need are available and supported. 2.1. Considerations for storage on DCN architecture The following features are not currently supported for DCN architectures: Copying a volume snapshot between edge sites. You can work around this by creating an image from the volume and using glance to copy the image. After the image is copied, you can create a volume from it. Ceph Rados Gateway (RGW) at the edge. CephFS at the edge. Instance high availability (HA) at the edge sites. RBD mirroring between sites. Instance migration, live or cold, either between edge sites, or from the central location to edge sites. You can still migrate instances within a site boundary. To move an image between sites, you must snapshot the image, and use glance image-import . For more information, see Confirming image snapshots can be created and copied between sites . Additionally, you must consider the following: You must upload images to the central location before copying them to edge sites; a copy of each image must exist in the Image service (glance) at the central location. You must use the RBD storage driver for the Image, Compute and Block Storage services. For each site, assign a unique availability zone, and use the same value for the NovaComputeAvailabilityZone and CinderStorageAvailabilityZone parameters. You can migrate an offline volume from an edge site to the central location, or vice versa. You cannot migrate volumes directly between edge sites. 2.2. Considerations for networking on DCN architecture The following features are not currently supported for DCN architectures: DHCP on DPDK nodes Conntrack for TC Flower Hardware Offload Conntrack for TC Flower Hardware Offload is available on DCN as a Technology Preview, and therefore using these solutions together is not fully supported by Red Hat. This feature should only be used with DCN for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details. The following ML2/OVS technologies are fully supported: OVS-DPDK without DHCP on the DPDK nodes SR-IOV TC flower hardware offload, without conntrack Neutron availability zones (AZs) with networker nodes at the edge, with one AZ per site Routed provider networks The following ML2/OVN networking technologies are fully supported: OVS-DPDK without DHCP on the DPDK nodes SR-IOV (without DHCP) TC flower hardware offload, without conntrack Routed provider networks OVN GW (networker node) with Neutron AZs supported Important Ensure that all router gateway ports reside on the OpenStack Controller nodes by setting OVNCMSOptions: 'enable-chassis-as-gw' and by providing one or more AZ values for the OVNAvailabilityZone parameter. Performing these actions prevents the routers from scheduling all chassis as potential hosts for the router gateway ports. For more information, see Configuring Network service availability zones with ML2/OVN in Configuring Red Hat OpenStack Platform networking . Additionally, you must consider the following: Network latency: Balance the latency as measured in round-trip time (RTT), with the expected number of concurrent API operations to maintain acceptable performance. Maximum TCP/IP throughput is inversely proportional to RTT. 
You can mitigate some issues with high-latency, high-bandwidth connections by tuning kernel TCP parameters. Contact Red Hat support if cross-site communication exceeds 100 ms. Network drop outs: If the edge site temporarily loses connection to the central site, then no OpenStack control plane API or CLI operations can be executed at the impacted edge site for the duration of the outage. For example, Compute nodes at the edge site are consequently unable to create a snapshot of an instance, issue an auth token, or delete an image. General OpenStack control plane API and CLI operations remain functional during this outage, and can continue to serve any other edge sites that have a working connection. Image type: You must use raw images when deploying a DCN architecture with Ceph storage. Image sizing: Overcloud node images - overcloud node images are downloaded from the central undercloud node. These images are potentially large files that will be transferred across all necessary networks from the central site to the edge site during provisioning. Instance images: If there is no block storage at the edge, then the Image service images traverse the WAN during first use. The images are copied or cached locally to the target edge nodes for all subsequent use. There is no size limit for glance images. Transfer times vary with available bandwidth and network latency. If there is block storage at the edge, then the image is copied over the WAN asynchronously for faster boot times at the edge. Provider networks: This is the recommended networking approach for DCN deployments. If you use provider networks at remote sites, then you must consider that the Networking service (neutron) does not place any limits or checks on where you can attach available networks. For example, if you use a provider network only in edge site A, you must ensure that you do not try to attach to the provider network in edge site B. This is because there are no validation checks on the provider network when binding it to a Compute node. Site-specific networks: A limitation in DCN networking arises if you use networks that are specific to a certain site: When you deploy centralized neutron controllers with Compute nodes, there are no triggers in neutron to identify a certain Compute node as a remote node. Consequently, the Compute nodes receive a list of other Compute nodes and automatically form tunnels between each other; the tunnels are formed from edge to edge through the central site. If you use VXLAN or Geneve, every Compute node at every site forms a tunnel with every other Compute node and Controller node, whether or not they are local or remote. This is not an issue if you are using the same neutron networks everywhere. When you use VLANs, neutron expects that all Compute nodes have the same bridge mappings, and that all VLANs are available at every site. Additional sites: If you need to expand from a central site to additional remote sites, you can use the openstack CLI on Red Hat OpenStack Platform director to add new network segments and subnets. If edge servers are not pre-provisioned, you must configure DHCP relay for introspection and provisioning on routed segments. Routing must be configured either on the cloud or within the networking infrastructure that connects each edge site to the hub. You should implement a networking design that allocates an L3 subnet for each Red Hat OpenStack Platform cluster network (external, internal API, and so on), unique to each site. 
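The availability zone parameters mentioned in the storage considerations above are normally supplied to the overcloud deployment in a heat environment file, one per site. The following is a minimal sketch only: the file name and the az-edge1 value are placeholder assumptions, and the OVNCMSOptions and OVNAvailabilityZone settings from the Important admonition are applied in the same way, typically as role-specific parameters for the Controller role.
USD cat > edge1-availability-zone.yaml <<'EOF'
parameter_defaults:
  # One unique availability zone per site, shared by Compute and Block Storage
  NovaComputeAvailabilityZone: az-edge1
  CinderStorageAvailabilityZone: az-edge1
EOF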
| null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_a_distributed_compute_node_dcn_architecture/planning_a_distributed_compute_node_dcn_deployment |
Image APIs | Image APIs OpenShift Container Platform 4.16 Reference guide for image APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/image_apis/index |
B.3.2. Verifying Signature of Packages | B.3.2. Verifying Signature of Packages To check the GnuPG signature of an RPM file after importing the builder's GnuPG key, use the following command (replace <rpm-file> with the file name of the RPM package): If all goes well, the following message is displayed: md5 gpg OK . This means that the signature of the package has been verified, that it is not corrupt, and therefore is safe to install and use. | [
"-K <rpm-file>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-keys-checking |
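A minimal end-to-end sketch of the verification described above, assuming the Red Hat release key in its usual location on a RHEL 6 system and an example package file name:
# Import the builder's public GnuPG key (path shown is the standard RHEL location)
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
# Check the signature of the downloaded package; a valid package reports "md5 gpg OK"
rpm -K example-package-1.0-1.el6.noarch.rpm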
Chapter 3. Migrating from Jenkins to OpenShift Pipelines or Tekton | Chapter 3. Migrating from Jenkins to OpenShift Pipelines or Tekton You can migrate your CI/CD workflows from Jenkins to Red Hat OpenShift Pipelines , a cloud-native CI/CD experience based on the Tekton project. 3.1. Comparison of Jenkins and OpenShift Pipelines concepts You can review and compare the following equivalent terms used in Jenkins and OpenShift Pipelines. 3.1.1. Jenkins terminology Jenkins offers declarative and scripted pipelines that are extensible using shared libraries and plugins. Some basic terms in Jenkins are as follows: Pipeline : Automates the entire process of building, testing, and deploying applications by using Groovy syntax. Node : A machine capable of either orchestrating or executing a scripted pipeline. Stage : A conceptually distinct subset of tasks performed in a pipeline. Plugins or user interfaces often use this block to display the status or progress of tasks. Step : A single task that specifies the exact action to be taken, either by using a command or a script. 3.1.2. OpenShift Pipelines terminology OpenShift Pipelines uses YAML syntax for declarative pipelines and consists of tasks. Some basic terms in OpenShift Pipelines are as follows: Pipeline : A set of tasks in a series, in parallel, or both. Task : A sequence of steps as commands, binaries, or scripts. PipelineRun : Execution of a pipeline with one or more tasks. TaskRun : Execution of a task with one or more steps. Note You can initiate a PipelineRun or a TaskRun with a set of inputs such as parameters and workspaces, and the execution results in a set of outputs and artifacts. Workspace : In OpenShift Pipelines, workspaces are conceptual blocks that serve the following purposes: Storage of inputs, outputs, and build artifacts. Common space to share data among tasks. Mount points for credentials held in secrets, configurations held in config maps, and common tools shared by an organization. Note In Jenkins, there is no direct equivalent of OpenShift Pipelines workspaces. You can think of the control node as a workspace, as it stores the cloned code repository, build history, and artifacts. When a job is assigned to a different node, the cloned code and the generated artifacts are stored in that node, but the control node maintains the build history. 3.1.3. Mapping of concepts The building blocks of Jenkins and OpenShift Pipelines are not equivalent, and a specific comparison does not provide a technically accurate mapping. The following terms and concepts in Jenkins and OpenShift Pipelines correlate in general: Table 3.1. Jenkins and OpenShift Pipelines - basic comparison Jenkins OpenShift Pipelines Pipeline Pipeline and PipelineRun Stage Task Step A step in a task 3.2. Migrating a sample pipeline from Jenkins to OpenShift Pipelines You can use the following equivalent examples to help migrate your build, test, and deploy pipelines from Jenkins to OpenShift Pipelines. 3.2.1. Jenkins pipeline Consider a Jenkins pipeline written in Groovy for building, testing, and deploying: pipeline { agent any stages { stage('Build') { steps { sh 'make' } } stage('Test'){ steps { sh 'make check' junit 'reports/**/*.xml' } } stage('Deploy') { steps { sh 'make publish' } } } } 3.2.2. 
OpenShift Pipelines pipeline To create a pipeline in OpenShift Pipelines that is equivalent to the preceding Jenkins pipeline, you create the following three tasks: Example build task YAML definition file apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-build spec: workspaces: - name: source steps: - image: my-ci-image command: ["make"] workingDir: USD(workspaces.source.path) Example test task YAML definition file apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-test spec: workspaces: - name: source steps: - image: my-ci-image command: ["make check"] workingDir: USD(workspaces.source.path) - image: junit-report-image script: | #!/usr/bin/env bash junit-report reports/**/*.xml workingDir: USD(workspaces.source.path) Example deploy task YAML definition file apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myprojectd-deploy spec: workspaces: - name: source steps: - image: my-deploy-image command: ["make deploy"] workingDir: USD(workspaces.source.path) You can combine the three tasks sequentially to form a pipeline in OpenShift Pipelines: Example: OpenShift Pipelines pipeline for building, testing, and deployment apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: myproject-pipeline spec: workspaces: - name: shared-dir tasks: - name: build taskRef: name: myproject-build workspaces: - name: source workspace: shared-dir - name: test taskRef: name: myproject-test workspaces: - name: source workspace: shared-dir - name: deploy taskRef: name: myproject-deploy workspaces: - name: source workspace: shared-dir 3.3. Migrating from Jenkins plugins to Tekton Hub tasks You can extend the capability of Jenkins by using plugins . To achieve similar extensibility in OpenShift Pipelines, use any of the tasks available from Tekton Hub . For example, consider the git-clone task in Tekton Hub, which corresponds to the git plugin for Jenkins. Example: git-clone task from Tekton Hub apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: demo-pipeline spec: params: - name: repo_url - name: revision workspaces: - name: source tasks: - name: fetch-from-git taskRef: name: git-clone params: - name: url value: USD(params.repo_url) - name: revision value: USD(params.revision) workspaces: - name: output workspace: source 3.4. Extending OpenShift Pipelines capabilities using custom tasks and scripts In OpenShift Pipelines, if you do not find the right task in Tekton Hub, or need greater control over tasks, you can create custom tasks and scripts to extend the capabilities of OpenShift Pipelines. Example: A custom task for running the maven test command apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: maven-test spec: workspaces: - name: source steps: - image: my-maven-image command: ["mvn test"] workingDir: USD(workspaces.source.path) Example: Run a custom shell script by providing its path ... steps: image: ubuntu script: | #!/usr/bin/env bash /workspace/my-script.sh ... Example: Run a custom Python script by writing it in the YAML file ... steps: image: python script: | #!/usr/bin/env python3 print("hello from python!") ... 3.5. Comparison of Jenkins and OpenShift Pipelines execution models Jenkins and OpenShift Pipelines offer similar functions but are different in architecture and execution. Table 3.2. Comparison of execution models in Jenkins and OpenShift Pipelines Jenkins OpenShift Pipelines Jenkins has a controller node. Jenkins runs pipelines and steps centrally, or orchestrates jobs running in other nodes. 
OpenShift Pipelines is serverless and distributed, and there is no central dependency for execution. Containers are launched by the Jenkins controller node through the pipeline. OpenShift Pipelines adopts a 'container-first' approach, where every step runs as a container in a pod (equivalent to nodes in Jenkins). Extensibility is achieved by using plugins. Extensibility is achieved by using tasks in Tekton Hub or by creating custom tasks and scripts. 3.6. Examples of common use cases Both Jenkins and OpenShift Pipelines offer capabilities for common CI/CD use cases, such as: Compiling, building, and deploying images using Apache Maven Extending the core capabilities by using plugins Reusing shareable libraries and custom scripts 3.6.1. Running a Maven pipeline in Jenkins and OpenShift Pipelines You can use Maven in both Jenkins and OpenShift Pipelines workflows for compiling, building, and deploying images. To map your existing Jenkins workflow to OpenShift Pipelines, consider the following examples: Example: Compile and build an image and deploy it to OpenShift using Maven in Jenkins #!/usr/bin/groovy node('maven') { stage 'Checkout' checkout scm stage 'Build' sh 'cd helloworld && mvn clean' sh 'cd helloworld && mvn compile' stage 'Run Unit Tests' sh 'cd helloworld && mvn test' stage 'Package' sh 'cd helloworld && mvn package' stage 'Archive artifact' sh 'mkdir -p artifacts/deployments && cp helloworld/target/*.war artifacts/deployments' archive 'helloworld/target/*.war' stage 'Create Image' sh 'oc login https://kubernetes.default -u admin -p admin --insecure-skip-tls-verify=true' sh 'oc new-project helloworldproject' sh 'oc project helloworldproject' sh 'oc process -f helloworld/jboss-eap70-binary-build.json | oc create -f -' sh 'oc start-build eap-helloworld-app --from-dir=artifacts/' stage 'Deploy' sh 'oc new-app helloworld/jboss-eap70-deploy.json' } Example: Compile and build an image and deploy it to OpenShift using Maven in OpenShift Pipelines. 
apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: maven-pipeline spec: workspaces: - name: shared-workspace - name: maven-settings - name: kubeconfig-dir optional: true params: - name: repo-url - name: revision - name: context-path tasks: - name: fetch-repo taskRef: name: git-clone workspaces: - name: output workspace: shared-workspace params: - name: url value: "USD(params.repo-url)" - name: subdirectory value: "" - name: deleteExisting value: "true" - name: revision value: USD(params.revision) - name: mvn-build taskRef: name: maven runAfter: - fetch-repo workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: "USD(params.context-path)" - name: GOALS value: ["-DskipTests", "clean", "compile"] - name: mvn-tests taskRef: name: maven runAfter: - mvn-build workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: "USD(params.context-path)" - name: GOALS value: ["test"] - name: mvn-package taskRef: name: maven runAfter: - mvn-tests workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: "USD(params.context-path)" - name: GOALS value: ["package"] - name: create-image-and-deploy taskRef: name: openshift-client runAfter: - mvn-package workspaces: - name: manifest-dir workspace: shared-workspace - name: kubeconfig-dir workspace: kubeconfig-dir params: - name: SCRIPT value: | cd "USD(params.context-path)" mkdir -p ./artifacts/deployments && cp ./target/*.war ./artifacts/deployments oc new-project helloworldproject oc project helloworldproject oc process -f jboss-eap70-binary-build.json | oc create -f - oc start-build eap-helloworld-app --from-dir=artifacts/ oc new-app jboss-eap70-deploy.json 3.6.2. Extending the core capabilities of Jenkins and OpenShift Pipelines by using plugins Jenkins has the advantage of a large ecosystem of numerous plugins developed over the years by its extensive user base. You can search and browse the plugins in the Jenkins Plugin Index . OpenShift Pipelines also has many tasks developed and contributed by the community and enterprise users. A publicly available catalog of reusable OpenShift Pipelines tasks are available in the Tekton Hub . In addition, OpenShift Pipelines incorporates many of the plugins of the Jenkins ecosystem within its core capabilities. For example, authorization is a critical function in both Jenkins and OpenShift Pipelines. While Jenkins ensures authorization using the Role-based Authorization Strategy plugin, OpenShift Pipelines uses OpenShift's built-in Role-based Access Control system. 3.6.3. Sharing reusable code in Jenkins and OpenShift Pipelines Jenkins shared libraries provide reusable code for parts of Jenkins pipelines. The libraries are shared between Jenkinsfiles to create highly modular pipelines without code repetition. Although there is no direct equivalent of Jenkins shared libraries in OpenShift Pipelines, you can achieve similar workflows by using tasks from the Tekton Hub in combination with custom tasks and scripts. 3.7. Additional resources Understanding OpenShift Pipelines Role-based Access Control | [
"pipeline { agent any stages { stage('Build') { steps { sh 'make' } } stage('Test'){ steps { sh 'make check' junit 'reports/**/*.xml' } } stage('Deploy') { steps { sh 'make publish' } } } }",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-build spec: workspaces: - name: source steps: - image: my-ci-image command: [\"make\"] workingDir: USD(workspaces.source.path)",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-test spec: workspaces: - name: source steps: - image: my-ci-image command: [\"make check\"] workingDir: USD(workspaces.source.path) - image: junit-report-image script: | #!/usr/bin/env bash junit-report reports/**/*.xml workingDir: USD(workspaces.source.path)",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myprojectd-deploy spec: workspaces: - name: source steps: - image: my-deploy-image command: [\"make deploy\"] workingDir: USD(workspaces.source.path)",
"apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: myproject-pipeline spec: workspaces: - name: shared-dir tasks: - name: build taskRef: name: myproject-build workspaces: - name: source workspace: shared-dir - name: test taskRef: name: myproject-test workspaces: - name: source workspace: shared-dir - name: deploy taskRef: name: myproject-deploy workspaces: - name: source workspace: shared-dir",
"apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: demo-pipeline spec: params: - name: repo_url - name: revision workspaces: - name: source tasks: - name: fetch-from-git taskRef: name: git-clone params: - name: url value: USD(params.repo_url) - name: revision value: USD(params.revision) workspaces: - name: output workspace: source",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: maven-test spec: workspaces: - name: source steps: - image: my-maven-image command: [\"mvn test\"] workingDir: USD(workspaces.source.path)",
"steps: image: ubuntu script: | #!/usr/bin/env bash /workspace/my-script.sh",
"steps: image: python script: | #!/usr/bin/env python3 print(\"hello from python!\")",
"#!/usr/bin/groovy node('maven') { stage 'Checkout' checkout scm stage 'Build' sh 'cd helloworld && mvn clean' sh 'cd helloworld && mvn compile' stage 'Run Unit Tests' sh 'cd helloworld && mvn test' stage 'Package' sh 'cd helloworld && mvn package' stage 'Archive artifact' sh 'mkdir -p artifacts/deployments && cp helloworld/target/*.war artifacts/deployments' archive 'helloworld/target/*.war' stage 'Create Image' sh 'oc login https://kubernetes.default -u admin -p admin --insecure-skip-tls-verify=true' sh 'oc new-project helloworldproject' sh 'oc project helloworldproject' sh 'oc process -f helloworld/jboss-eap70-binary-build.json | oc create -f -' sh 'oc start-build eap-helloworld-app --from-dir=artifacts/' stage 'Deploy' sh 'oc new-app helloworld/jboss-eap70-deploy.json' }",
"apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: maven-pipeline spec: workspaces: - name: shared-workspace - name: maven-settings - name: kubeconfig-dir optional: true params: - name: repo-url - name: revision - name: context-path tasks: - name: fetch-repo taskRef: name: git-clone workspaces: - name: output workspace: shared-workspace params: - name: url value: \"USD(params.repo-url)\" - name: subdirectory value: \"\" - name: deleteExisting value: \"true\" - name: revision value: USD(params.revision) - name: mvn-build taskRef: name: maven runAfter: - fetch-repo workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: \"USD(params.context-path)\" - name: GOALS value: [\"-DskipTests\", \"clean\", \"compile\"] - name: mvn-tests taskRef: name: maven runAfter: - mvn-build workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: \"USD(params.context-path)\" - name: GOALS value: [\"test\"] - name: mvn-package taskRef: name: maven runAfter: - mvn-tests workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: \"USD(params.context-path)\" - name: GOALS value: [\"package\"] - name: create-image-and-deploy taskRef: name: openshift-client runAfter: - mvn-package workspaces: - name: manifest-dir workspace: shared-workspace - name: kubeconfig-dir workspace: kubeconfig-dir params: - name: SCRIPT value: | cd \"USD(params.context-path)\" mkdir -p ./artifacts/deployments && cp ./target/*.war ./artifacts/deployments oc new-project helloworldproject oc project helloworldproject oc process -f jboss-eap70-binary-build.json | oc create -f - oc start-build eap-helloworld-app --from-dir=artifacts/ oc new-app jboss-eap70-deploy.json"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/jenkins/migrating-from-jenkins-to-openshift-pipelines_images-other-jenkins-agent |
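The YAML examples above only define the resources; to try them out you still need to create them in a cluster and trigger a run. One possible approach, sketched here with placeholder file names and a hypothetical PVC named shared-pvc backing the shared-dir workspace, uses oc together with the tkn CLI:
# Create the tasks and the pipeline (file names are placeholders for the YAML shown above)
oc apply -f myproject-build.yaml -f myproject-test.yaml -f myproject-deploy.yaml -f myproject-pipeline.yaml
# Start a PipelineRun, binding the shared workspace to an existing PersistentVolumeClaim
tkn pipeline start myproject-pipeline \
  --workspace name=shared-dir,claimName=shared-pvc \
  --showlog
Alternatively, you can create a PipelineRun resource directly with oc apply if you prefer not to use tkn.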
Chapter 9. Overview of NVMe over fabric devices | Chapter 9. Overview of NVMe over fabric devices Non-volatile Memory Express™ (NVMe™) is an interface that allows host software utilities to communicate with solid-state drives. Use the following types of fabric transport to configure NVMe over fabric devices: NVMe over Remote Direct Memory Access (NVMe/RDMA) For information about how to configure NVMe/RDMA, see Configuring NVMe over fabrics using NVMe/RDMA . NVMe over Fibre Channel (NVMe/FC) For information about how to configure NVMe/FC, see Configuring NVMe over fabrics using NVMe/FC . NVMe over TCP (NVMe/TCP) For information about how to configure NVMe/TCP, see Configuring NVMe over fabrics using NVMe/TCP . When using NVMe over fabrics, the solid-state drive does not have to be local to your system; it can be configured remotely through an NVMe over fabrics device. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_storage_devices/overview-of-nvme-over-fabric-devices_managing-storage-devices |
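The chapters referenced above cover the full configuration for each transport. As a brief illustration of what the host side of an NVMe/TCP connection typically looks like with the nvme-cli tool (the address, port, and subsystem NQN below are placeholders):
# Discover subsystems exported by a remote NVMe/TCP target
nvme discover -t tcp -a 192.0.2.10 -s 4420
# Connect to one of the discovered subsystems
nvme connect -t tcp -n nqn.2014-08.com.example:nvme:target01 -a 192.0.2.10 -s 4420
# The remote namespace now appears as a local NVMe block device
nvme list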
Chapter 1. Preparing to install on Nutanix | Chapter 1. Preparing to install on Nutanix Before you install an OpenShift Container Platform cluster, be sure that your Nutanix environment meets the following requirements. 1.1. Nutanix version requirements You must install the OpenShift Container Platform cluster to a Nutanix environment that meets the following requirements. Table 1.1. Version requirements for Nutanix virtual environments Component Required version Nutanix AOS 6.5.2.7 or later Prism Central pc.2022.6 or later 1.2. Environment requirements Before you install an OpenShift Container Platform cluster, review the following Nutanix AOS environment requirements. 1.2.1. Required account privileges The installation program requires access to a Nutanix account with the necessary permissions to deploy the cluster and to maintain the daily operation of it. The following options are available to you: You can use a local Prism Central user account with administrative privileges. Using a local account is the quickest way to grant access to an account with the required permissions. If your organization's security policies require that you use a more restrictive set of permissions, use the permissions that are listed in the following table to create a custom Cloud Native role in Prism Central. You can then assign the role to a user account that is a member of a Prism Central authentication directory. Consider the following when managing this user account: When assigning entities to the role, ensure that the user can access only the Prism Element and subnet that are required to deploy the virtual machines. Ensure that the user is a member of the project to which it needs to assign virtual machines. For more information, see the Nutanix documentation about creating a Custom Cloud Native role , assigning a role , and adding a user to a project . Example 1.1. Required permissions for creating a Custom Cloud Native role Nutanix Object When required Required permissions in Nutanix API Description Categories Always Create_Category_Mapping Create_Or_Update_Name_Category Create_Or_Update_Value_Category Delete_Category_Mapping Delete_Name_Category Delete_Value_Category View_Category_Mapping View_Name_Category View_Value_Category Create, read, and delete categories that are assigned to the OpenShift Container Platform machines. Images Always Create_Image Delete_Image View_Image Create, read, and delete the operating system images used for the OpenShift Container Platform machines. Virtual Machines Always Create_Virtual_Machine Delete_Virtual_Machine View_Virtual_Machine Create, read, and delete the OpenShift Container Platform machines. Clusters Always View_Cluster View the Prism Element clusters that host the OpenShift Container Platform machines. Subnets Always View_Subnet View the subnets that host the OpenShift Container Platform machines. Projects If you will associate a project with compute machines, control plane machines, or all machines. View_Project View the projects defined in Prism Central and allow a project to be assigned to the OpenShift Container Platform machines. 1.2.2. Cluster limits Available resources vary between clusters. The number of possible clusters within a Nutanix environment is limited primarily by available storage space and any limitations associated with the resources that the cluster creates, and resources that you require to deploy the cluster, such a IP addresses and networks. 1.2.3. 
Cluster resources A minimum of 800 GB of storage is required to use a standard cluster. When you deploy a OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your Nutanix instance. Although these resources use 856 GB of storage, the bootstrap node is destroyed as part of the installation process. A standard OpenShift Container Platform installation creates the following resources: 1 label Virtual machines: 1 disk image 1 temporary bootstrap node 3 control plane nodes 3 compute machines 1.2.4. Networking requirements You must use either AHV IP Address Management (IPAM) or Dynamic Host Configuration Protocol (DHCP) for the network and ensure that it is configured to provide persistent IP addresses to the cluster machines. Additionally, create the following networking resources before you install the OpenShift Container Platform cluster: IP addresses DNS records Note It is recommended that each OpenShift Container Platform node in the cluster have access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server. However, an NTP server prevents errors typically associated with asynchronous server clocks. 1.2.4.1. Required IP Addresses An installer-provisioned installation requires two static virtual IP (VIP) addresses: A VIP address for the API is required. This address is used to access the cluster API. A VIP address for ingress is required. This address is used for cluster ingress traffic. You specify these IP addresses when you install the OpenShift Container Platform cluster. 1.2.4.2. DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the Nutanix instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. If you use your own DNS or DHCP server, you must also create records for each node, including the bootstrap, control plane, and compute nodes. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 1.2. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. 1.3. Configuring the Cloud Credential Operator utility The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). To install a cluster on Nutanix, you must set the CCO to manual mode as part of the installation process. To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. 
You have installed the OpenShift CLI ( oc ). Procedure Obtain the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. Additional resources Preparing to update a cluster with manually maintained credentials | [
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command."
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_nutanix/preparing-to-install-on-nutanix |
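Before you run the installer, it is worth confirming that the DNS records and VIPs described above resolve as expected. A quick check with dig, using placeholder cluster and base domain names, might look like this:
# Placeholders: cluster name "mycluster", base domain "example.com"
dig +short api.mycluster.example.com
# Any name under *.apps should match the wildcard record and return the Ingress VIP
dig +short test.apps.mycluster.example.com
Both queries should return the corresponding VIP addresses that you plan to supply during installation.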
Chapter 2. Initializing InstructLab | Chapter 2. Initializing InstructLab You must initialize the InstructLab environments to begin working with the Red Hat Enterprise Linux AI models. 2.1. Creating your RHEL AI environment You can start interacting with LLM models and the RHEL AI tooling by initializing the InstructLab environment. Prerequisites You installed RHEL AI with the bootable container image. You have root user access on your machine. Procedure Optional: To set up training profiles, you need to know the GPU accelerators in your machine. You can view your system information by running the following command: USD ilab system info Initialize InstructLab by running the following command: USD ilab config init The CLI prompts you to setup your config.yaml . Example output Follow the CLI prompts to set up your training hardware configurations. This updates your config.yaml file and adds the proper train configurations for training an LLM model. Type the number of the YAML file that matches your hardware specifications. Important These profiles only add the necessary configurations to the train section of your config.yaml file, therefore any profile can be selected for inference serving a model. Example output of selecting training profiles Example output of a completed ilab config init run. You selected: A100_H100_x8.yaml Initialization completed successfully, you're ready to start using `ilab`. Enjoy! Configuring your system's GPU for inference serving: This step is only required if you are using Red Hat Enterprise Linux AI exclusively for inference serving. Edit your config.yaml file by running the following command: USD ilab config edit In the evaluate section of the configurations file, edit the gpus: parameter and add the number of accelerators on your machine. evaluate: base_branch: null base_model: ~/.cache/instructlab/models/granite-7b-starter branch: null gpus: <num-gpus> In the vllm section of the serve field in the configuration file, edit the gpus: and vllm_args: ["--tensor-parallel-size"] parameters and add the number of accelerators on your machine. serve: backend: vllm chat_template: auto host_port: 127.0.0.1:8000 llama_cpp: gpu_layers: -1 llm_family: '' max_ctx_size: 4096 model_path: ~/.cache/instructlab/models/granite-7b-redhat-lab vllm: llm_family: '' vllm_args: ["--tensor-parallel-size", "<num-gpus>"] gpus: <num-gpus> If you want to use the skeleton taxonomy tree, which includes two skills and one knowledge qna.yaml file, you can clone the skeleton repository and place it in the taxonomy directory by running the following command: rm -rf ~/.local/share/instructlab/taxonomy/ ; git clone https://github.com/RedHatOfficial/rhelai-sample-taxonomy.git ~/.local/share/instructlab/taxonomy/ Directory structure of the InstructLab environment 1 ~/.cache/instructlab/models/ : Contains all downloaded large language models, including the saved output of ones you generate with RHEL AI. 2 ~/.local/share/instructlab/datasets/ : Contains data output from the SDG phase, built on modifications to the taxonomy repository. 3 ~/.local/share/instructlab/taxonomy/ : Contains the skill and knowledge data. 4 ~/.local/share/instructlab/phased/<phase1-or-phase2>/checkpoints/ : Contains the output of the multi-phase training process Verification You can view the full config.yaml file by running the following command USD ilab config show You can also manually edit the config.yaml file by running the following command: USD ilab config edit | [
"ilab system info",
"ilab config init",
"Welcome to InstructLab CLI. This guide will help you to setup your environment. Please provide the following values to initiate the environment [press Enter for defaults]: Generating `/home/<example-user>/.config/instructlab/config.yaml` and `/home/<example-user>/.local/share/instructlab/internal/train_configuration/profiles`",
"Please choose a train profile to use: [0] No profile (CPU-only) [1] A100_H100_x2.yaml [2] A100_H100_x4.yaml [3] A100_H100_x8.yaml [4] L40_x4.yaml [5] L40_x8.yaml [6] L4_x8.yaml Enter the number of your choice [hit enter for the default CPU-only profile] [0]:",
"You selected: A100_H100_x8.yaml Initialization completed successfully, you're ready to start using `ilab`. Enjoy!",
"ilab config edit",
"evaluate: base_branch: null base_model: ~/.cache/instructlab/models/granite-7b-starter branch: null gpus: <num-gpus>",
"serve: backend: vllm chat_template: auto host_port: 127.0.0.1:8000 llama_cpp: gpu_layers: -1 llm_family: '' max_ctx_size: 4096 model_path: ~/.cache/instructlab/models/granite-7b-redhat-lab vllm: llm_family: '' vllm_args: [\"--tensor-parallel-size\", \"<num-gpus>\"] gpus: <num-gpus>",
"rm -rf ~/.local/share/instructlab/taxonomy/ ; git clone https://github.com/RedHatOfficial/rhelai-sample-taxonomy.git ~/.local/share/instructlab/taxonomy/",
"├─ ~/.cache/instructlab/models/ 1 ├─ ~/.local/share/instructlab/datasets 2 ├─ ~/.local/share/instructlab/taxonomy 3 ├─ ~/.local/share/instructlab/phased/<phase1-or-phase2>/checkpoints/ 4",
"ilab config show",
"ilab config edit"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.1/html/building_your_rhel_ai_environment/initializing_instructlab |
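After you edit the gpus and vllm_args values, you can confirm that the configuration file contains what you expect without reopening the editor. The path below matches the location generated by ilab config init; adjust it if your home directory differs:
# Show the accelerator-related settings written in the steps above
grep -nE 'gpus:|tensor-parallel-size' ~/.config/instructlab/config.yaml
# Or review the full effective configuration
ilab config show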
A.2. Reserved Keywords | A.2. Reserved Keywords Keyword Usage ADD add set option ALL standard aggregate function , function , query expression body , query term select clause , quantified comparison predicate ALTER alter , alter column options , alter options AND between predicate , boolean term ANY standard aggregate function , quantified comparison predicate ARRAY_AGG ordered aggregate function AS alter , array table , create procedure , option namespace , create table , create trigger , derived column , dynamic data statement , function , loop statement , xml namespace element , object table , select derived column , table subquery , text table , table name , with list element , xml serialize , xml table ASC sort specification ATOMIC compound statement , for each row trigger action BEGIN compound statement , for each row trigger action BETWEEN between predicate BIGDECIMAL data type BIGINT data type BIGINTEGER data type BLOB data type , xml serialize BOOLEAN data type BOTH function BREAK branching statement BY group by clause , order by clause , window specification BYTE data type CALL callable statement , call statement CASE case expression , searched case expression CAST function CHAR function , data type CLOB data type , xml serialize COLUMN alter column options CONSTRAINT create table body CONTINUE branching statement CONVERT function CREATE create procedure , create foreign temp table , create table , create temporary table , create trigger , procedure body definition CROSS cross join DATE data type DAY function DECIMAL data type DECLARE declare statement DEFAULT table element , xml namespace element , object table column , procedure parameter , xml table column DELETE alter , create trigger , delete statement DESC sort specification DISTINCT standard aggregate function , function , query expression body , query term , select clause DOUBLE data type DROP drop option , drop table EACH for each row trigger action ELSE case expression , if statement , searched case expression END case expression , compound statement , for each row trigger action , searched case expression ERROR raise error statement ESCAPE match predicate , text table EXCEPT query expression body EXEC dynamic data statement , call statement EXECUTE dynamic data statement , call statement EXISTS exists predicate FALSE non numeric literal FETCH fetch clause FILTER filter clause FLOAT data type FOR for each row trigger action , function , text aggregate function , xml table column FOREIGN alter options , create procedure , create foreign temp table , create table , foreign key FROM delete statement , from clause , function FULL qualified table FUNCTION create procedure GROUP group by clause HAVING having clause HOUR function IF if statement IMMEDIATE dynamic data statement IN procedure parameter , in predicate INNER qualified table INOUT procedure parameter INSERT alter , create trigger , function , insert statement INTEGER data type INTERSECT query term INTO dynamic data statement , insert statement , into clause IS is null predicate JOIN cross join , qualified table LANGUAGE object table LATERAL table subquery LEADING function LEAVE branching statement LEFT function , qualified table LIKE match predicate LIKE_REGEX like regex predicate LIMIT limit clause LOCAL create temporary table LONG data type LOOP loop statement MAKEDEP option clause , table primary MAKENOTDEP option clause , table primary MERGE insert statement MINUTE function MONTH function NO xml namespace element , text table column , text table NOCACHE option 
clause NOT between predicate , compound statement , table element , is null predicate , match predicate , boolean factor , procedure parameter , procedure result column , like regex predicate , in predicate , temporary table element NULL table element , is null predicate , non numeric literal , procedure parameter , procedure result column , temporary table element , xml query OBJECT data type OF alter , create trigger OFFSET limit clause ON alter , create foreign temp table , create trigger , loop statement , qualified table , xml query ONLY fetch clause OPTION option clause OPTIONS alter options list , options clause OR boolean value expression ORDER order by clause OUT procedure parameter OUTER qualified table OVER window specification PARAMETER alter column options PARTITION window specification PRIMARY table element , create temporary table , primary key PROCEDURE alter , alter options , create procedure , procedure body definition REAL data type REFERENCES foreign key RETURN assignment statement , return statement , data statement RETURNS create procedure RIGHT function , qualified table ROW fetch clause , for each row trigger action , limit clause , text table ROWS fetch clause , limit clause SECOND function SELECT select clause SET add set option , option namespace , update statement SHORT data type SIMILAR match predicate SMALLINT data type SOME standard aggregate function , quantified comparison predicate SQLEXCEPTION sql exception SQLSTATE sql exception SQLWARNING raise statement STRING dynamic data statement , data type , xml serialize TABLE alter options , create procedure , create foreign temp table , create table , create temporary table , drop table , query primary , table subquery TEMPORARY create foreign temp table , create temporary table THEN case expression , searched case expression TIME data type TIMESTAMP data type TINYINT data type TO match predicate TRAILING function TRANSLATE function TRIGGER alter , create trigger TRUE non numeric literal UNION cross join , query expression body UNIQUE other constraints , table element UNKNOWN non numeric literal UPDATE alter , create trigger , dynamic data statement , update statement USER function USING dynamic data statement VALUES insert statement VARBINARY data type , xml serialize VARCHAR data type , xml serialize VIRTUAL alter options , create procedure , create table , procedure body definition WHEN case expression , searched case expression WHERE filter clause , where clause WHILE while statement WITH assignment statement , query expression , data statement WITHOUT assignment statement , data statement XML data type XMLAGG ordered aggregate function XMLATTRIBUTES xml attributes XMLCOMMENT function XMLCONCAT function XMLELEMENT xml element XMLFOREST xml forest XMLNAMESPACES xml namespaces XMLPARSE xml parse XMLPI function XMLQUERY xml query XMLSERIALIZE xml serialize XMLTABLE xml table YEAR function | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/reserved_keywords |
Appendix B. Template writing reference | Appendix B. Template writing reference Embedded Ruby (ERB) is a tool for generating text files based on templates that combine plain text with Ruby code. Red Hat Satellite uses ERB syntax in the following cases: Provisioning templates For more information, see Creating Provisioning Templates in Provisioning hosts . Remote execution job templates For more information, see Chapter 12, Configuring and setting up remote jobs . Report templates For more information, see Chapter 10, Using report templates to monitor hosts . Templates for partition tables For more information, see Creating Partition Tables in Provisioning hosts . Smart Class Parameters For more information, see Configuring Puppet Smart Class Parameters in Managing configurations using Puppet integration . This section provides an overview of Satellite-specific macros and variables that can be used in ERB templates along with some usage examples. Note that the default templates provided by Red Hat Satellite ( Hosts > Templates > Provisioning Templates , Hosts > Templates > Job templates , Monitor > Reports > Report Templates ) also provide a good source of ERB syntax examples. When provisioning a host or running a remote job, the code in the ERB is executed and the variables are replaced with the host specific values. This process is referred to as rendering . Satellite Server has the safemode rendering option enabled by default, which prevents any harmful code being executed from templates. B.1. Accessing the template writing reference in the Satellite web UI You can access the template writing reference document in the Satellite web UI. Procedure Log in to the Satellite web UI. In the Satellite web UI, navigate to Administer > About . Click the Templates DSL link in the Support section. B.2. Using autocompletion in templates You can access a list of available macros and usage information in the template editor with the autocompletion option. This works for all templates within Satellite. Procedure In the Satellite web UI, navigate to either Hosts > Templates > Partition tables , Hosts > Templates > Provisioning Templates , or Hosts > Templates > Job templates . Click the settings icon at the top right corner of the template editor and select Autocompletion . Press Ctrl + Space in the template editor to access a list of all available macros. You can narrow down the list of macros by typing in more information about what you are looking for. For example, if you are looking for a method to list the ID of the content source for a host, you can type host and check the list of available macros for content source. A window to the dropdown provides a description of the macro, its usage, and the value it will return. When you find the method you are looking for, hit Enter to input the method. You can also enable Live Autocompletion in the settings menu to view a list of macros that match the pattern whenever you type something. However, this might input macros in unintended places, like package names in a provisioning template. B.3. Writing ERB templates The following tags are the most important and commonly used in ERB templates: <% %> All Ruby code is enclosed within <% %> in an ERB template. The code is executed when the template is rendered. It can contain Ruby control flow structures as well as Satellite-specific macros and variables. For example: Note that this template silently performs an action with a service and returns nothing at the output. 
<%= %> This provides the same functionality as <% %> but when the template is executed, the code output is inserted into the template. This is useful for variable substitution, for example: Example input: Example rendering: Example input: Example rendering: Note that if you enter an incorrect variable, no output is returned. However, if you try to call a method on an incorrect variable, the following error message returns: Example input: Example rendering: <% -%>, <%= -%> By default, a newline character is inserted after a Ruby block if it is closed at the end of a line: Example input: Example rendering: To change the default behavior, modify the enclosing mark with -%> : Example input: Example rendering: This is used to reduce the number of lines, where Ruby syntax permits, in rendered templates. White spaces in ERB tags are ignored. An example of how this would be used in a report template to remove unnecessary newlines between a FQDN and IP address: Example input: Example rendering: <%# %> Encloses a comment that is ignored during template rendering: Example input: This generates no output. Indentation in ERB templates Because of the varying lengths of the ERB tags, indenting the ERB syntax might seem messy. ERB syntax ignore white space. One method of handling the indentation is to declare the ERB tag at the beginning of each new line and then use white space within the ERB tag to outline the relationships within the syntax, for example: B.4. Troubleshooting ERB templates The Satellite web UI provides two ways to verify the template rendering for a specific host: Directly in the template editor - when editing a template (under Hosts > Templates > Partition tables , Hosts > Templates > Provisioning Templates , or Hosts > Templates > Job templates ), on the Template tab click Preview and select a host from the list. The template then renders in the text field using the selected host's parameters. Preview failures can help to identify issues in your template. At the host's details page - select a host at Hosts > All Hosts and click the Templates tab to list templates associated with the host. Select Review from the list to the selected template to view it's rendered version. B.5. Generic Satellite-specific macros This section lists Satellite-specific macros for ERB templates. You can use the macros listed in the following table across all kinds of templates. Table B.1. Generic macros Name Description indent(n) Indents the block of code by n spaces, useful when using a snippet template that is not indented. foreman_url(kind) Returns the full URL to host-rendered templates of the given kind. For example, templates of the "provision" type usually reside at http://HOST/unattended/provision . snippet(name) Renders the specified snippet template. Useful for nesting provisioning templates. snippets(file) Renders the specified snippet found in the Foreman database, attempts to load it from the unattended/snippets/ directory if it is not found in the database. snippet_if_exists(name) Renders the specified snippet, skips if no snippet with the specified name is found. B.6. Template macros If you want to write custom templates, you can use some of the following macros. Depending on the template type, some of the following macros have different requirements. For more information about the available macros for report templates, in the Satellite web UI, navigate to Monitor > Reports > Report Templates , and click Create Template . In the Create Template window, click the Help tab. 
For more information about the available macros for job templates, in the Satellite web UI, navigate to Hosts > Templates > Job templates , and click the New Job Template . In the New Job Template window, click the Help tab. input Using the input macro, you can customize the input data that the template can work with. You can define the input name, type, and the options that are available for users. For report templates, you can only use user inputs. When you define a new input and save the template, you can then reference the input in the ERB syntax of the template body. This loads the value from user input cpus . load_hosts Using the load_hosts macro, you can generate a complete list of hosts. Use the load_hosts macro with the each_record macro to load records in batches of 1000 to reduce memory consumption. If you want to filter the list of hosts for the report, you can add the option search: input(' Example_Host ') : In this example, you first create an input that you then use to refine the search criteria that the load_hosts macro retrieves. report_row Using the report_row macro, you can create a formatted report for ease of analysis. The report_row macro requires the report_render macro to generate the output. Example input: Example rendering: You can add extra columns to the report by adding another header. The following example adds IP addresses to the report: Example input: Example rendering: report_render This macro is available only for report templates. Using the report_render macro, you create the output for the report. During the template rendering process, you can select the format that you want for the report. YAML, JSON, HTML, and CSV formats are supported. render_template() This macro is available only for job templates. Using this macro, you can render a specific template. You can also enable and define arguments that you want to pass to the template. truthy Using the truthy macro, you can declare if the value passed is true or false, regardless of whether the value is an integer or boolean or string. This macro helps to avoid confusion when your template contains multiple value types. For example, the boolean value true is not the same as the string value "true" . With this macro, you can declare how you want the template to interpret the value and avoid confusion. You can use truthy to declare values as follows: falsy The falsy macro serves the same purpose as the truthy macro. Using the falsy macro, you can declare if the value passed in is true or false, regardless of whether the value is an integer or boolean or string. You can use falsy to declare values as follows: B.7. Host-specific variables The following variables enable using host data within templates. Note that job templates accept only @host variables. Table B.2. Host-specific variables and macros Name Description @host.architecture The architecture of the host. @host.bond_interfaces Returns an array of all bonded interfaces. See Section B.10, "Parsing arrays" . @host.capabilities The method of system provisioning, can be either build (for example kickstart) or image. @host.certname The SSL certificate name of the host. @host.diskLayout The disk layout of the host. Can be inherited from the operating system. @host.domain The domain of the host. @host.environment Deprecated Use the host_puppet_environment variable instead. The Puppet environment of the host. @host.facts Returns a Ruby hash of facts from Facter. For example to access the 'ipaddress' fact from the output, specify @host.facts['ipaddress']. 
@host.grub_pass Returns the host's bootloader password. @host.hostgroup The host group of the host. host_enc['parameters'] Returns a Ruby hash containing information on host parameters. For example, use host_enc['parameters']['lifecycle_environment'] to get the lifecycle environment of a host. @host.image_build? Returns true if the host is provisioned using an image. @host.interfaces Contains an array of all available host interfaces including the primary interface. See Section B.10, "Parsing arrays" . @host.interfaces_with_identifier('IDs') Returns array of interfaces with given identifier. You can pass an array of multiple identifiers as an input, for example @host.interfaces_with_identifier(['eth0', 'eth1']). See Section B.10, "Parsing arrays" . @host.ip The IP address of the host. @host.location The location of the host. @host.mac The MAC address of the host. @host.managed_interfaces Returns an array of managed interfaces (excluding BMC and bonded interfaces). See Section B.10, "Parsing arrays" . @host.medium The assigned operating system installation medium. @host.name The full name of the host. @host.operatingsystem.family The operating system family. @host.operatingsystem.major The major version number of the assigned operating system. @host.operatingsystem.minor The minor version number of the assigned operating system. @host.operatingsystem.name The assigned operating system name. @host.operatingsystem.boot_files_uri(medium_provider) Full path to the kernel and initrd, returns an array. @host.os.medium_uri(@host) The URI used for provisioning (path configured in installation media). host_param('parameter_name') Returns the value of the specified host parameter. host_param_false?('parameter_name') Returns false if the specified host parameter evaluates to false. host_param_true?('parameter_name') Returns true if the specified host parameter evaluates to true. @host.primary_interface Returns the primary interface of the host. @host.provider The compute resource provider. @host.provision_interface Returns the provisioning interface of the host. Returns an interface object. @host.ptable The partition table name. @host.puppet_ca_server Deprecated Use the host_puppet_ca_server variable instead. The Puppet CA server the host must use. @host.puppetmaster Deprecated Use the host_puppet_server variable instead. The Puppet server the host must use. @host.pxe_build? Returns true if the host is provisioned using the network or PXE. @host.shortname The short name of the host. @host.sp_ip The IP address of the BMC interface. @host.sp_mac The MAC address of the BMC interface. @host.sp_name The name of the BMC interface. @host.sp_subnet The subnet of the BMC network. @host.subnet.dhcp Returns true if a DHCP proxy is configured for this host. @host.subnet.dns_primary The primary DNS server of the host. @host.subnet.dns_secondary The secondary DNS server of the host. @host.subnet.gateway The gateway of the host. @host.subnet.mask The subnet mask of the host. @host.url_for_boot(:initrd) Full path to the initrd image associated with this host. Not recommended, as it does not interpolate variables. @host.url_for_boot(:kernel) Full path to the kernel associated with this host. Not recommended, as it does not interpolate variables, prefer boot_files_uri. @provisioning_type Equals to 'host' or 'hostgroup' depending on type of provisioning. @static Returns true if the network configuration is static. @template_name Name of the template being rendered. 
grub_pass Returns a bootloader argument to set the encrypted bootloader password, such as --md5pass=#{@host.grub_pass} . ks_console Returns a string assembled using the port and the baud rate of the host which can be added to a kernel line. For example console=ttyS1,9600 . root_pass Returns the root password configured for the system. The majority of common Ruby methods can be applied on host-specific variables. For example, to extract the last segment of the host's IP address, you can use: B.8. Kickstart-specific variables The following variables are designed to be used within kickstart provisioning templates. Table B.3. Kickstart-specific variables Name Description @arch The host architecture name, same as @host.architecture.name. @dynamic Returns true if the partition table being used is a %pre script (has the #Dynamic option as the first line of the table). @epel A command which will automatically install the correct version of the epel-release rpm. Use in a %post script. @mediapath The full kickstart line to provide the URL command. @osver The operating system major version number, same as @host.operatingsystem.major. B.9. Conditional statements In your templates, you might perform different actions depending on which value exists. To achieve this, you can use conditional statements in your ERB syntax. In the following example, the ERB syntax searches for a specific host name and returns an output depending on the value it finds: Example input Example rendering B.10. Parsing arrays While writing or modifying templates, you might encounter variables that return arrays. For example, host variables related to network interfaces, such as @host.interfaces or @host.bond_interfaces , return interface data grouped in an array. To extract a parameter value of a specific interface, use Ruby methods to parse the array. Finding the correct method to parse an array The following procedure is an example that you can use to find the relevant methods to parse arrays in your template. In this example, a report template is used, but the steps are applicable to other templates. To retrieve the NIC of a content host, in this example, using the @host.interfaces variable returns class values that you can then use to find methods to parse the array. Example input: Example rendering: In the Create Template window, click the Help tab and search for the ActiveRecord_Associations_CollectionProxy and Nic::Base classes. For ActiveRecord_Associations_CollectionProxy , in the Allowed methods or members column, you can view the following methods to parse the array: For Nic::Base , in the Allowed methods or members column, you can view the following method to parse the array: To iterate through an interface array, add the relevant methods to the ERB syntax: Example input: Example rendering: B.11. 
Example template snippets Checking if a host has Puppet and Puppetlabs enabled The following example checks if the host has the Puppet and Puppetlabs repositories enabled: Capturing major and minor versions of a host's operating system The following example shows how to capture the minor and major version of the host's operating system, which can be used for package related decisions: Importing snippets to a template The following example imports the subscription_manager_registration snippet to the template and indents it by four spaces: Conditionally importing a Kickstart snippet The following example imports the kickstart_networking_setup snippet if the host's subnet has the DHCP boot mode enabled: Parsing values from host custom facts You can use the host.facts variable to parse values from a host's facts and custom facts. In this example luks_stat is a custom fact that you can parse in the same manner as dmi::system::serial_number , which is a host fact: In this example, you can customize the Applicable Errata report template to parse for custom information about the kernel version of each host: | [
"<% if @host.operatingsystem.family == \"Redhat\" && @host.operatingsystem.major.to_i > 6 -%> systemctl <%= input(\"action\") %> <%= input(\"service\") %> <% else -%> service <%= input(\"service\") %> <%= input(\"action\") %> <% end -%>",
"echo <%= @host.name %>",
"host.example.com",
"<% server_name = @host.fqdn %> <%= server_name %>",
"host.example.com",
"<%= @ example_incorrect_variable .fqdn -%>",
"undefined method `fqdn' for nil:NilClass",
"<%= \"line1\" %> <%= \"line2\" %>",
"line1 line2",
"<%= \"line1\" -%> <%= \"line2\" %>",
"line1line2",
"<%= @host.fqdn -%> <%= @host.ip -%>",
"host.example.com10.10.181.216",
"<%# A comment %>",
"<%- load_hosts.each do |host| -%> <%- if host.build? %> <%= host.name %> build is in progress <%- end %> <%- end %>",
"<%= input('cpus') %>",
"<%- load_hosts().each_record do |host| -%> <%= host.name %>",
"<% load_hosts(search: input(' Example_Host ')).each_record do |host| -%> <%= host.name %> <% end -%>",
"<%- load_hosts(search: input(' Example_Host ')).each_record do |host| -%> <%- report_row( 'Server FQDN': host.name ) -%> <%- end -%> <%= report_render -%>",
"Server FQDN host1.example.com host2.example.com host3.example.com host4.example.com host5.example.com host6.example.com",
"<%- load_hosts(search: input('host')).each_record do |host| -%> <%- report_row( 'Server FQDN': host.name, 'IP': host.ip ) -%> <%- end -%> <%= report_render -%>",
"Server FQDN,IP host1.example.com , 10.8.30.228 host2.example.com , 10.8.30.227 host3.example.com , 10.8.30.226 host4.example.com , 10.8.30.225 host5.example.com , 10.8.30.224 host6.example.com , 10.8.30.223",
"<%= report_render -%>",
"truthy?(\"true\") => true truthy?(1) => true truthy?(\"false\") => false truthy?(0) => false",
"falsy?(\"true\") => false falsy?(1) => false falsy?(\"false\") => true falsy?(0) => true",
"<% @host.ip.split('.').last %>",
"<% load_hosts().each_record do |host| -%> <% if @host.name == \" host1.example.com \" -%> <% result=\"positive\" -%> <% else -%> <% result=\"negative\" -%> <% end -%> <%= result -%>",
"host1.example.com positive",
"<%= @host.interfaces -%>",
"<Nic::Base::ActiveRecord_Associations_CollectionProxy:0x00007f734036fbe0>",
"[] each find_in_batches first map size to_a",
"alias? attached_devices attached_devices_identifiers attached_to bond_options children_mac_addresses domain fqdn identifier inheriting_mac ip ip6 link mac managed? mode mtu nic_delay physical? primary provision shortname subnet subnet6 tag virtual? vlanid",
"<% load_hosts().each_record do |host| -%> <% host.interfaces.each do |iface| -%> iface.alias?: <%= iface.alias? %> iface.attached_to: <%= iface.attached_to %> iface.bond_options: <%= iface.bond_options %> iface.children_mac_addresses: <%= iface.children_mac_addresses %> iface.domain: <%= iface.domain %> iface.fqdn: <%= iface.fqdn %> iface.identifier: <%= iface.identifier %> iface.inheriting_mac: <%= iface.inheriting_mac %> iface.ip: <%= iface.ip %> iface.ip6: <%= iface.ip6 %> iface.link: <%= iface.link %> iface.mac: <%= iface.mac %> iface.managed?: <%= iface.managed? %> iface.mode: <%= iface.mode %> iface.mtu: <%= iface.mtu %> iface.physical?: <%= iface.physical? %> iface.primary: <%= iface.primary %> iface.provision: <%= iface.provision %> iface.shortname: <%= iface.shortname %> iface.subnet: <%= iface.subnet %> iface.subnet6: <%= iface.subnet6 %> iface.tag: <%= iface.tag %> iface.virtual?: <%= iface.virtual? %> iface.vlanid: <%= iface.vlanid %> <%- end -%>",
"host1.example.com iface.alias?: false iface.attached_to: iface.bond_options: iface.children_mac_addresses: [] iface.domain: iface.fqdn: host1.example.com iface.identifier: ens192 iface.inheriting_mac: 00:50:56:8d:4c:cf iface.ip: 10.10.181.13 iface.ip6: iface.link: true iface.mac: 00:50:56:8d:4c:cf iface.managed?: true iface.mode: balance-rr iface.mtu: iface.physical?: true iface.primary: true iface.provision: true iface.shortname: host1.example.com iface.subnet: iface.subnet6: iface.tag: iface.virtual?: false iface.vlanid:",
"<% pm_set = @host.puppetmaster.empty? ? false : true puppet_enabled = pm_set || host_param_true?('force-puppet') puppetlabs_enabled = host_param_true?('enable-puppetlabs-repo') %>",
"<% os_major = @host.operatingsystem.major.to_i os_minor = @host.operatingsystem.minor.to_i %> <% if ((os_minor < 2) && (os_major < 14)) -%> <% end -%>",
"<%= indent 4 do snippet 'subscription_manager_registration' end %>",
"<% subnet = @host.subnet %> <% if subnet.respond_to?(:dhcp_boot_mode?) -%> <%= snippet 'kickstart_networking_setup' %> <% end -%>",
"'Serial': host.facts['dmi::system::serial_number'], 'Encrypted': host.facts['luks_stat'],",
"<%- report_row( 'Host': host.name, 'Operating System': host.operatingsystem, 'Kernel': host.facts['uname::release'], 'Environment': host.single_lifecycle_environment ? host.single_lifecycle_environment.name : nil, 'Erratum': erratum.errata_id, 'Type': erratum.errata_type, 'Published': erratum.issued, 'Applicable since': erratum.created_at, 'Severity': erratum.severity, 'Packages': erratum.package_names, 'CVEs': erratum.cves, 'Reboot suggested': erratum.reboot_suggested, ) -%>"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_hosts/Template_Writing_Reference_managing-hosts |
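To tie together the report macros described above (load_hosts, report_row, report_render, and host attributes such as host.name and host.ip), the following is a minimal sketch that writes a small host-inventory report template to a local file; the file path, template file name, and the 'hosts' input are illustrative assumptions rather than part of the product documentation, and the ERB body reuses only macros shown earlier in this section.

# Write a small host-inventory report template to a local file (hypothetical path).
cat > /tmp/host_inventory_report.erb <<'ERB'
<%- load_hosts(search: input('hosts')).each_record do |host| -%>
<%- report_row(
      'Server FQDN': host.name,
      'IP': host.ip
    ) -%>
<%- end -%>
<%= report_render -%>
ERB

# Review the generated template before pasting it into a new report template
# in the web UI; the 'hosts' input must be defined on the template before it can render.
cat /tmp/host_inventory_report.erb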
Part I. Vulnerability reporting with Clair on Red Hat Quay overview | Part I. Vulnerability reporting with Clair on Red Hat Quay overview The content in this guide explains the key purposes and concepts of Clair on Red Hat Quay. It also contains information about Clair releases and the location of official Clair containers. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/vulnerability_reporting_with_clair_on_red_hat_quay/vulnerability-reporting-clair-quay-overview |
Chapter 8. Configuring OpenShift Serverless Functions | Chapter 8. Configuring OpenShift Serverless Functions To improve the process of deployment of your application code, you can use OpenShift Serverless to deploy stateless, event-driven functions as a Knative service on OpenShift Container Platform. If you want to develop functions, you must complete the set up steps. 8.1. Prerequisites To enable the use of OpenShift Serverless Functions on your cluster, you must complete the following steps: The OpenShift Serverless Operator and Knative Serving are installed on your cluster. Note Functions are deployed as a Knative service. If you want to use event-driven architecture with your functions, you must also install Knative Eventing. You have the oc CLI installed. You have the Knative ( kn ) CLI installed. Installing the Knative CLI enables the use of kn func commands which you can use to create and manage functions. You have installed Docker Container Engine or Podman version 3.4.7 or higher. You have access to an available image registry, such as the OpenShift Container Registry. If you are using Quay.io as the image registry, you must ensure that either the repository is not private, or that you have followed the OpenShift Container Platform documentation on Allowing pods to reference images from other secured registries . If you are using the OpenShift Container Registry, a cluster administrator must expose the registry . 8.2. Setting up Podman To use advanced container management features, you might want to use Podman with OpenShift Serverless Functions. To do so, you need to start the Podman service and configure the Knative ( kn ) CLI to connect to it. Procedure Start the Podman service that serves the Docker API on a UNIX socket at USD{XDG_RUNTIME_DIR}/podman/podman.sock : USD systemctl start --user podman.socket Note On most systems, this socket is located at /run/user/USD(id -u)/podman/podman.sock . Establish the environment variable that is used to build a function: USD export DOCKER_HOST="unix://USD{XDG_RUNTIME_DIR}/podman/podman.sock" Run the build command inside your function project directory with the -v flag to see verbose output. You should see a connection to your local UNIX socket: USD kn func build -v 8.3. Setting up Podman on macOS To use advanced container management features, you might want to use Podman with OpenShift Serverless Functions. To do so on macOS, you need to start the Podman machine and configure the Knative ( kn ) CLI to connect to it. Procedure Create the Podman machine: USD podman machine init --memory=8192 --cpus=2 --disk-size=20 Start the Podman machine, which serves the Docker API on a UNIX socket: USD podman machine start Starting machine "podman-machine-default" Waiting for VM ... Mounting volume... /Users/myuser:/Users/user [...truncated output...] You can still connect Docker API clients by setting DOCKER_HOST using the following command in your terminal session: export DOCKER_HOST='unix:///Users/myuser/.local/share/containers/podman/machine/podman-machine-default/podman.sock' Machine "podman-machine-default" started successfully Note On most macOS systems, this socket is located at /Users/myuser/.local/share/containers/podman/machine/podman-machine-default/podman.sock . 
Establish the environment variable that is used to build a function: USD export DOCKER_HOST='unix:///Users/myuser/.local/share/containers/podman/machine/podman-machine-default/podman.sock' Run the build command inside your function project directory with the -v flag to see verbose output. You should see a connection to your local UNIX socket: USD kn func build -v 8.4. steps For more information about Docker Container Engine or Podman, see Container build tool options . See Getting started with functions . | [
"systemctl start --user podman.socket",
"export DOCKER_HOST=\"unix://USD{XDG_RUNTIME_DIR}/podman/podman.sock\"",
"kn func build -v",
"podman machine init --memory=8192 --cpus=2 --disk-size=20",
"podman machine start Starting machine \"podman-machine-default\" Waiting for VM Mounting volume... /Users/myuser:/Users/user [...truncated output...] You can still connect Docker API clients by setting DOCKER_HOST using the following command in your terminal session: export DOCKER_HOST='unix:///Users/myuser/.local/share/containers/podman/machine/podman-machine-default/podman.sock' Machine \"podman-machine-default\" started successfully",
"export DOCKER_HOST='unix:///Users/myuser/.local/share/containers/podman/machine/podman-machine-default/podman.sock'",
"kn func build -v"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/installing_openshift_serverless/configuring-serverless-functions |
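The Linux steps in section 8.2 can be collected into one short shell session, as sketched below; the project directory is an illustrative placeholder, and the commands are limited to the ones shown above.

# Start the rootless Podman socket that serves the Docker API.
systemctl start --user podman.socket

# Point the Knative CLI at the Podman socket (default rootless location).
export DOCKER_HOST="unix://${XDG_RUNTIME_DIR}/podman/podman.sock"

# Build the function from its project directory with verbose output;
# the path below stands in for your own function project.
cd ~/projects/my-function
kn func build -v

If the verbose build log does not show a connection to the UNIX socket, confirm that the socket exists at /run/user/$(id -u)/podman/podman.sock before retrying.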
Part VI. Custom tasks and work item handlers | Part VI. Custom tasks and work item handlers As a business rules developer, you can create custom tasks and work item handlers to execute custom code within your process flows and extend the operations available for use in Red Hat Process Automation Manager. You can use custom tasks to develop operations that Red Hat Process Automation Manager does not directly provide and include them in process diagrams. Custom tasks and work item handlers support Business Central, standalone editors, and Visual Studio Code (VS Code). This chapter describes how to use Business Central to create, manage, and deploy custom tasks and work item handlers. In Business Central, each task in a process diagram has a WorkItemDefinition Java class with an associated WorkItemHandler Java class. You can define a WorkItemDefinition using an MVFLEX Expression Language (MVEL) map as part of a MVEL list in a .wid file. Place the .wid file in the global directory under the project root or in the directory of the BPMN file. The work item handler contains Java code registered with Business Central and implements org.kie.api.runtime.process.WorkItemHandler . The Java code of the work item handler is executed when the task is triggered. You can customize and register a work item handler to execute your own Java code in custom tasks. Prerequisites Business Central is deployed and is running on a web or application server. You are logged in to Business Central. Maven is installed. The host has access to the Internet. The build process uses the Internet for downloading Maven packages from external repositories. Your system has access to the Red Hat Maven repository either locally or online | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/assembly-custom-tasks-and-work-item-handlers |
Chapter 20. External DNS Operator | Chapter 20. External DNS Operator 20.1. External DNS Operator release notes The External DNS Operator deploys and manages ExternalDNS to provide name resolution for services and routes from the external DNS provider to OpenShift Container Platform. Important The External DNS Operator is only supported on the x86_64 architecture. These release notes track the development of the External DNS Operator in OpenShift Container Platform. 20.1.1. External DNS Operator 1.2.0 The following advisory is available for the External DNS Operator version 1.2.0: RHEA-2022:5867 ExternalDNS Operator 1.2 operator/operand containers 20.1.1.1. New features The External DNS Operator now supports AWS shared VPC. For more information, see Creating DNS records in a different AWS Account using a shared VPC . 20.1.1.2. Bug fixes The update strategy for the operand changed from Rolling to Recreate . ( OCPBUGS-3630 ) 20.1.2. External DNS Operator 1.1.1 The following advisory is available for the External DNS Operator version 1.1.1: RHEA-2024:0536 ExternalDNS Operator 1.1 operator/operand containers 20.1.3. External DNS Operator 1.1.0 This release included a rebase of the operand from the upstream project version 0.13.1. The following advisory is available for the External DNS Operator version 1.1.0: RHEA-2022:9086-01 ExternalDNS Operator 1.1 operator/operand containers 20.1.3.1. Bug fixes Previously, the ExternalDNS Operator enforced an empty defaultMode value for volumes, which caused constant updates due to a conflict with the OpenShift API. Now, the defaultMode value is not enforced and operand deployment does not update constantly. ( OCPBUGS-2793 ) 20.1.4. External DNS Operator 1.0.1 The following advisory is available for the External DNS Operator version 1.0.1: RHEA-2024:0537 ExternalDNS Operator 1.0 operator/operand containers 20.1.5. External DNS Operator 1.0.0 The following advisory is available for the External DNS Operator version 1.0.0: RHEA-2022:5867 ExternalDNS Operator 1.0 operator/operand containers 20.1.5.1. Bug fixes Previously, the External DNS Operator issued a warning about the violation of the restricted SCC policy during ExternalDNS operand pod deployments. This issue has been resolved. ( BZ#2086408 ) 20.2. External DNS Operator in OpenShift Container Platform The External DNS Operator deploys and manages ExternalDNS to provide the name resolution for services and routes from the external DNS provider to OpenShift Container Platform. 20.2.1. External DNS Operator The External DNS Operator implements the External DNS API from the olm.openshift.io API group. The External DNS Operator updates services, routes, and external DNS providers. Prerequisites You have installed the yq CLI tool. Procedure You can deploy the External DNS Operator on demand from the OperatorHub. Deploying the External DNS Operator creates a Subscription object. 
Check the name of an install plan by running the following command: USD oc -n external-dns-operator get sub external-dns-operator -o yaml | yq '.status.installplan.name' Example output install-zcvlr Check if the status of an install plan is Complete by running the following command: USD oc -n external-dns-operator get ip <install_plan_name> -o yaml | yq '.status.phase' Example output Complete View the status of the external-dns-operator deployment by running the following command: USD oc get -n external-dns-operator deployment/external-dns-operator Example output NAME READY UP-TO-DATE AVAILABLE AGE external-dns-operator 1/1 1 1 23h 20.2.2. External DNS Operator logs You can view External DNS Operator logs by using the oc logs command. Procedure View the logs of the External DNS Operator by running the following command: USD oc logs -n external-dns-operator deployment/external-dns-operator -c external-dns-operator 20.2.2.1. External DNS Operator domain name limitations The External DNS Operator uses the TXT registry which adds the prefix for TXT records. This reduces the maximum length of the domain name for TXT records. A DNS record cannot be present without a corresponding TXT record, so the domain name of the DNS record must follow the same limit as the TXT records. For example, a DNS record of <domain_name_from_source> results in a TXT record of external-dns-<record_type>-<domain_name_from_source> . The domain name of the DNS records generated by the External DNS Operator has the following limitations: Record type Number of characters CNAME 44 Wildcard CNAME records on AzureDNS 42 A 48 Wildcard A records on AzureDNS 46 The following error appears in the External DNS Operator logs if the generated domain name exceeds any of the domain name limitations: time="2022-09-02T08:53:57Z" level=error msg="Failure in zone test.example.io. [Id: /hostedzone/Z06988883Q0H0RL6UMXXX]" time="2022-09-02T08:53:57Z" level=error msg="InvalidChangeBatch: [FATAL problem: DomainLabelTooLong (Domain label is too long) encountered with 'external-dns-a-hello-openshift-aaaaaaaaaa-bbbbbbbbbb-ccccccc']\n\tstatus code: 400, request id: e54dfd5a-06c6-47b0-bcb9-a4f7c3a4e0c6" 20.3. Installing External DNS Operator on cloud providers You can install the External DNS Operator on cloud providers such as AWS, Azure, and GCP. 20.3.1. Installing the External DNS Operator with OperatorHub You can install the External DNS Operator by using the OpenShift Container Platform OperatorHub. Procedure Click Operators OperatorHub in the OpenShift Container Platform web console. Click External DNS Operator . You can use the Filter by keyword text box or the filter list to search for External DNS Operator from the list of Operators. Select the external-dns-operator namespace. On the External DNS Operator page, click Install . On the Install Operator page, ensure that you selected the following options: Update the channel as stable-v1 . Installation mode as A specific name on the cluster . Installed namespace as external-dns-operator . If namespace external-dns-operator does not exist, it gets created during the Operator installation. Select Approval Strategy as Automatic or Manual . Approval Strategy is set to Automatic by default. Click Install . If you select Automatic updates, the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, the OLM creates an update request. 
As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. Verification Verify that the External DNS Operator shows the Status as Succeeded on the Installed Operators dashboard. 20.3.2. Installing the External DNS Operator by using the CLI You can install the External DNS Operator by using the CLI. Prerequisites You are logged in to the OpenShift Container Platform web console as a user with cluster-admin permissions. You are logged into the OpenShift CLI ( oc ). Procedure Create a Namespace object: Create a YAML file that defines the Namespace object: Example namespace.yaml file apiVersion: v1 kind: Namespace metadata: name: external-dns-operator Create the Namespace object by running the following command: USD oc apply -f namespace.yaml Create an OperatorGroup object: Create a YAML file that defines the OperatorGroup object: Example operatorgroup.yaml file apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: external-dns-operator namespace: external-dns-operator spec: upgradeStrategy: Default targetNamespaces: - external-dns-operator Create the OperatorGroup object by running the following command: USD oc apply -f operatorgroup.yaml Create a Subscription object: Create a YAML file that defines the Subscription object: Example subscription.yaml file apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: external-dns-operator namespace: external-dns-operator spec: channel: stable-v1 installPlanApproval: Automatic name: external-dns-operator source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription object by running the following command: USD oc apply -f subscription.yaml Verification Get the name of the install plan from the subscription by running the following command: USD oc -n external-dns-operator \ get subscription external-dns-operator \ --template='{{.status.installplan.name}}{{"\n"}}' Verify that the status of the install plan is Complete by running the following command: USD oc -n external-dns-operator \ get ip <install_plan_name> \ --template='{{.status.phase}}{{"\n"}}' Verify that the status of the external-dns-operator pod is Running by running the following command: USD oc -n external-dns-operator get pod Example output NAME READY STATUS RESTARTS AGE external-dns-operator-5584585fd7-5lwqm 2/2 Running 0 11m Verify that the catalog source of the subscription is redhat-operators by running the following command: USD oc -n external-dns-operator get subscription Example output NAME PACKAGE SOURCE CHANNEL external-dns-operator external-dns-operator redhat-operators stable-v1 Check the external-dns-operator version by running the following command: USD oc -n external-dns-operator get csv Example output NAME DISPLAY VERSION REPLACES PHASE external-dns-operator.v<1.y.z> ExternalDNS Operator <1.y.z> Succeeded 20.4. External DNS Operator configuration parameters The External DNS Operator includes the following configuration parameters. 20.4.1. External DNS Operator configuration parameters The External DNS Operator includes the following configuration parameters: Parameter Description spec Enables the type of a cloud provider. spec: provider: type: AWS 1 aws: credentials: name: aws-access-key 2 1 Defines available options such as AWS, GCP, Azure, and Infoblox. 2 Defines a secret name for your cloud provider. zones Enables you to specify DNS zones by their domains. 
If you do not specify zones, the ExternalDNS resource discovers all of the zones present in your cloud provider account. zones: - "myzoneid" 1 1 Specifies the name of DNS zones. domains Enables you to specify AWS zones by their domains. If you do not specify domains, the ExternalDNS resource discovers all of the zones present in your cloud provider account. domains: - filterType: Include 1 matchType: Exact 2 name: "myzonedomain1.com" 3 - filterType: Include matchType: Pattern 4 pattern: ".*\\.otherzonedomain\\.com" 5 1 Ensures that the ExternalDNS resource includes the domain name. 2 Instructs ExternalDNS that the domain matching has to be exact as opposed to regular expression match. 3 Defines the name of the domain. 4 Sets the regex-domain-filter flag in the ExternalDNS resource. You can limit possible domains by using a Regex filter. 5 Defines the regex pattern to be used by the ExternalDNS resource to filter the domains of the target zones. source Enables you to specify the source for the DNS records, Service or Route . source: 1 type: Service 2 service: serviceType: 3 - LoadBalancer - ClusterIP labelFilter: 4 matchLabels: external-dns.mydomain.org/publish: "yes" hostnameAnnotation: "Allow" 5 fqdnTemplate: - "{{.Name}}.myzonedomain.com" 6 1 Defines the settings for the source of DNS records. 2 The ExternalDNS resource uses the Service type as the source for creating DNS records. 3 Sets the service-type-filter flag in the ExternalDNS resource. The serviceType contains the following fields: default : LoadBalancer expected : ClusterIP NodePort LoadBalancer ExternalName 4 Ensures that the controller considers only those resources which matches with label filter. 5 The default value for hostnameAnnotation is Ignore which instructs ExternalDNS to generate DNS records using the templates specified in the field fqdnTemplates . When the value is Allow the DNS records get generated based on the value specified in the external-dns.alpha.kubernetes.io/hostname annotation. 6 The External DNS Operator uses a string to generate DNS names from sources that don't define a hostname, or to add a hostname suffix when paired with the fake source. source: type: OpenShiftRoute 1 openshiftRouteOptions: routerName: default 2 labelFilter: matchLabels: external-dns.mydomain.org/publish: "yes" 1 Creates DNS records. 2 If the source type is OpenShiftRoute , then you can pass the Ingress Controller name. The ExternalDNS resource uses the canonical name of the Ingress Controller as the target for CNAME records. 20.5. Creating DNS records on AWS You can create DNS records on AWS and AWS GovCloud by using External DNS Operator. 20.5.1. Creating DNS records on an public hosted zone for AWS by using Red Hat External DNS Operator You can create DNS records on a public hosted zone for AWS by using the Red Hat External DNS Operator. You can use the same instructions to create DNS records on a hosted zone for AWS GovCloud. Procedure Check the user. The user must have access to the kube-system namespace. If you don't have the credentials, as you can fetch the credentials from the kube-system namespace to use the cloud provider client: USD oc whoami Example output system:admin Fetch the values from aws-creds secret present in kube-system namespace. 
USD export AWS_ACCESS_KEY_ID=USD(oc get secrets aws-creds -n kube-system --template={{.data.aws_access_key_id}} | base64 -d) USD export AWS_SECRET_ACCESS_KEY=USD(oc get secrets aws-creds -n kube-system --template={{.data.aws_secret_access_key}} | base64 -d) Get the routes to check the domain: USD oc get routes --all-namespaces | grep console Example output openshift-console console console-openshift-console.apps.testextdnsoperator.apacshift.support console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.testextdnsoperator.apacshift.support downloads http edge/Redirect None Get the list of dns zones to find the one which corresponds to the previously found route's domain: USD aws route53 list-hosted-zones | grep testextdnsoperator.apacshift.support Example output HOSTEDZONES terraform /hostedzone/Z02355203TNN1XXXX1J6O testextdnsoperator.apacshift.support. 5 Create ExternalDNS resource for route source: USD cat <<EOF | oc create -f - apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-aws 1 spec: domains: - filterType: Include 2 matchType: Exact 3 name: testextdnsoperator.apacshift.support 4 provider: type: AWS 5 source: 6 type: OpenShiftRoute 7 openshiftRouteOptions: routerName: default 8 EOF 1 Defines the name of external DNS resource. 2 By default all hosted zones are selected as potential targets. You can include a hosted zone that you need. 3 The matching of the target zone's domain has to be exact (as opposed to regular expression match). 4 Specify the exact domain of the zone you want to update. The hostname of the routes must be subdomains of the specified domain. 5 Defines the AWS Route53 DNS provider. 6 Defines options for the source of DNS records. 7 Defines OpenShift route resource as the source for the DNS records which gets created in the previously specified DNS provider. 8 If the source is OpenShiftRoute , then you can pass the OpenShift Ingress Controller name. External DNS Operator selects the canonical hostname of that router as the target while creating CNAME record. Check the records created for OCP routes using the following command: USD aws route53 list-resource-record-sets --hosted-zone-id Z02355203TNN1XXXX1J6O --query "ResourceRecordSets[?Type == 'CNAME']" | grep console 20.5.2. Creating DNS records in a different AWS Account using a shared VPC You can use the ExternalDNS Operator to create DNS records in a different AWS account using a shared Virtual Private Cloud (VPC). By using a shared VPC, an organization can connect resources from multiple projects to a common VPC network. Organizations can then use VPC sharing to use a single Route 53 instance across multiple AWS accounts. Prerequisites You have created two Amazon AWS accounts: one with a VPC and a Route 53 private hosted zone configured (Account A), and another for installing a cluster (Account B). You have created an IAM Policy and IAM Role with the appropriate permissions in Account A for Account B to create DNS records in the Route 53 hosted zone of Account A. You have installed a cluster in Account B into the existing VPC for Account A. You have installed the ExternalDNS Operator in the cluster in Account B. 
Procedure Get the Role ARN of the IAM Role that you created to allow Account B to access Account A's Route 53 hosted zone by running the following command: USD aws --profile account-a iam get-role --role-name user-rol1 | head -1 Example output ROLE arn:aws:iam::1234567890123:role/user-rol1 2023-09-14T17:21:54+00:00 3600 / AROA3SGB2ZRKRT5NISNJN user-rol1 Locate the private hosted zone to use with Account A's credentials by running the following command: USD aws --profile account-a route53 list-hosted-zones | grep testextdnsoperator.apacshift.support Example output HOSTEDZONES terraform /hostedzone/Z02355203TNN1XXXX1J6O testextdnsoperator.apacshift.support. 5 Create the ExternalDNS object by running the following command: USD cat <<EOF | oc create -f - apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-aws spec: domains: - filterType: Include matchType: Exact name: testextdnsoperator.apacshift.support provider: type: AWS aws: assumeRole: arn: arn:aws:iam::12345678901234:role/user-rol1 1 source: type: OpenShiftRoute openshiftRouteOptions: routerName: default EOF 1 Specify the Role ARN to have DNS records created in Account A. Check the records created for OpenShift Container Platform (OCP) routes by using the following command: USD aws --profile account-a route53 list-resource-record-sets --hosted-zone-id Z02355203TNN1XXXX1J6O --query "ResourceRecordSets[?Type == 'CNAME']" | grep console-openshift-console 20.6. Creating DNS records on Azure You can create DNS records on Azure by using the External DNS Operator. Important Using the External DNS Operator on a Microsoft Entra Workload ID-enabled cluster or a cluster that runs in Microsoft Azure Government (MAG) regions is not supported. 20.6.1. Creating DNS records on an Azure public DNS zone You can create DNS records on a public DNS zone for Azure by using the External DNS Operator. Prerequisites You must have administrator privileges. The admin user must have access to the kube-system namespace. 
Procedure Fetch the credentials from the kube-system namespace to use the cloud provider client by running the following command: USD CLIENT_ID=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_client_id}} | base64 -d) USD CLIENT_SECRET=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_client_secret}} | base64 -d) USD RESOURCE_GROUP=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_resourcegroup}} | base64 -d) USD SUBSCRIPTION_ID=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_subscription_id}} | base64 -d) USD TENANT_ID=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_tenant_id}} | base64 -d) Log in to Azure by running the following command: USD az login --service-principal -u "USD{CLIENT_ID}" -p "USD{CLIENT_SECRET}" --tenant "USD{TENANT_ID}" Get a list of routes by running the following command: USD oc get routes --all-namespaces | grep console Example output openshift-console console console-openshift-console.apps.test.azure.example.com console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.test.azure.example.com downloads http edge/Redirect None Get a list of DNS zones by running the following command: USD az network dns zone list --resource-group "USD{RESOURCE_GROUP}" Create a YAML file, for example, external-dns-sample-azure.yaml , that defines the ExternalDNS object: Example external-dns-sample-azure.yaml file apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-azure 1 spec: zones: - "/subscriptions/1234567890/resourceGroups/test-azure-xxxxx-rg/providers/Microsoft.Network/dnszones/test.azure.example.com" 2 provider: type: Azure 3 source: openshiftRouteOptions: 4 routerName: default 5 type: OpenShiftRoute 6 1 Specifies the External DNS name. 2 Defines the zone ID. 3 Defines the provider type. 4 You can define options for the source of DNS records. 5 If the source type is OpenShiftRoute , you can pass the OpenShift Ingress Controller name. External DNS selects the canonical hostname of that router as the target while creating CNAME record. 6 Defines the route resource as the source for the Azure DNS records. Check the DNS records created for OpenShift Container Platform routes by running the following command: USD az network dns record-set list -g "USD{RESOURCE_GROUP}" -z test.azure.example.com | grep console Note To create records on private hosted zones on private Azure DNS, you need to specify the private zone under the zones field which populates the provider type to azure-private-dns in the ExternalDNS container arguments. 20.7. Creating DNS records on GCP You can create DNS records on Google Cloud Platform (GCP) by using the External DNS Operator. Important Using the External DNS Operator on a cluster with GCP Workload Identity enabled is not supported. For more information about the GCP Workload Identity, see GCP Workload Identity . 20.7.1. Creating DNS records on a public managed zone for GCP You can create DNS records on a public managed zone for GCP by using the External DNS Operator. Prerequisites You must have administrator privileges. 
Procedure Copy the gcp-credentials secret in the encoded-gcloud.json file by running the following command: USD oc get secret gcp-credentials -n kube-system --template='{{USDv := index .data "service_account.json"}}{{USDv}}' | base64 -d - > decoded-gcloud.json Export your Google credentials by running the following command: USD export GOOGLE_CREDENTIALS=decoded-gcloud.json Activate your account by using the following command: USD gcloud auth activate-service-account <client_email as per decoded-gcloud.json> --key-file=decoded-gcloud.json Set your project by running the following command: USD gcloud config set project <project_id as per decoded-gcloud.json> Get a list of routes by running the following command: USD oc get routes --all-namespaces | grep console Example output openshift-console console console-openshift-console.apps.test.gcp.example.com console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.test.gcp.example.com downloads http edge/Redirect None Get a list of managed zones by running the following command: USD gcloud dns managed-zones list | grep test.gcp.example.com Example output qe-cvs4g-private-zone test.gcp.example.com Create a YAML file, for example, external-dns-sample-gcp.yaml , that defines the ExternalDNS object: Example external-dns-sample-gcp.yaml file apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-gcp 1 spec: domains: - filterType: Include 2 matchType: Exact 3 name: test.gcp.example.com 4 provider: type: GCP 5 source: openshiftRouteOptions: 6 routerName: default 7 type: OpenShiftRoute 8 1 Specifies the External DNS name. 2 By default, all hosted zones are selected as potential targets. You can include your hosted zone. 3 The domain of the target must match the string defined by the name key. 4 Specify the exact domain of the zone you want to update. The hostname of the routes must be subdomains of the specified domain. 5 Defines the provider type. 6 You can define options for the source of DNS records. 7 If the source type is OpenShiftRoute , you can pass the OpenShift Ingress Controller name. External DNS selects the canonical hostname of that router as the target while creating CNAME record. 8 Defines the route resource as the source for GCP DNS records. Check the DNS records created for OpenShift Container Platform routes by running the following command: USD gcloud dns record-sets list --zone=qe-cvs4g-private-zone | grep console 20.8. Creating DNS records on Infoblox You can create DNS records on Infoblox by using the External DNS Operator. 20.8.1. Creating DNS records on a public DNS zone on Infoblox You can create DNS records on a public DNS zone on Infoblox by using the External DNS Operator. Prerequisites You have access to the OpenShift CLI ( oc ). You have access to the Infoblox UI. 
Procedure Create a secret object with Infoblox credentials by running the following command: USD oc -n external-dns-operator create secret generic infoblox-credentials --from-literal=EXTERNAL_DNS_INFOBLOX_WAPI_USERNAME=<infoblox_username> --from-literal=EXTERNAL_DNS_INFOBLOX_WAPI_PASSWORD=<infoblox_password> Get a list of routes by running the following command: USD oc get routes --all-namespaces | grep console Example Output openshift-console console console-openshift-console.apps.test.example.com console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.test.example.com downloads http edge/Redirect None Create a YAML file, for example, external-dns-sample-infoblox.yaml , that defines the ExternalDNS object: Example external-dns-sample-infoblox.yaml file apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-infoblox 1 spec: provider: type: Infoblox 2 infoblox: credentials: name: infoblox-credentials gridHost: USD{INFOBLOX_GRID_PUBLIC_IP} wapiPort: 443 wapiVersion: "2.3.1" domains: - filterType: Include matchType: Exact name: test.example.com source: type: OpenShiftRoute 3 openshiftRouteOptions: routerName: default 4 1 Specifies the External DNS name. 2 Defines the provider type. 3 You can define options for the source of DNS records. 4 If the source type is OpenShiftRoute , you can pass the OpenShift Ingress Controller name. External DNS selects the canonical hostname of that router as the target while creating CNAME record. Create the ExternalDNS resource on Infoblox by running the following command: USD oc create -f external-dns-sample-infoblox.yaml From the Infoblox UI, check the DNS records created for console routes: Click Data Management DNS Zones . Select the zone name. 20.9. Configuring the cluster-wide proxy on the External DNS Operator After configuring the cluster-wide proxy, the Operator Lifecycle Manager (OLM) triggers automatic updates to all of the deployed Operators with the new contents of the HTTP_PROXY , HTTPS_PROXY , and NO_PROXY environment variables. 20.9.1. Trusting the certificate authority of the cluster-wide proxy You can configure the External DNS Operator to trust the certificate authority of the cluster-wide proxy. Procedure Create the config map to contain the CA bundle in the external-dns-operator namespace by running the following command: USD oc -n external-dns-operator create configmap trusted-ca To inject the trusted CA bundle into the config map, add the config.openshift.io/inject-trusted-cabundle=true label to the config map by running the following command: USD oc -n external-dns-operator label cm trusted-ca config.openshift.io/inject-trusted-cabundle=true Update the subscription of the External DNS Operator by running the following command: USD oc -n external-dns-operator patch subscription external-dns-operator --type='json' -p='[{"op": "add", "path": "/spec/config", "value":{"env":[{"name":"TRUSTED_CA_CONFIGMAP_NAME","value":"trusted-ca"}]}}]' Verification After the deployment of the External DNS Operator is completed, verify that the trusted CA environment variable is added to the external-dns-operator deployment by running the following command: USD oc -n external-dns-operator exec deploy/external-dns-operator -c external-dns-operator -- printenv TRUSTED_CA_CONFIGMAP_NAME Example output trusted-ca | [
"oc -n external-dns-operator get sub external-dns-operator -o yaml | yq '.status.installplan.name'",
"install-zcvlr",
"oc -n external-dns-operator get ip <install_plan_name> -o yaml | yq '.status.phase'",
"Complete",
"oc get -n external-dns-operator deployment/external-dns-operator",
"NAME READY UP-TO-DATE AVAILABLE AGE external-dns-operator 1/1 1 1 23h",
"oc logs -n external-dns-operator deployment/external-dns-operator -c external-dns-operator",
"time=\"2022-09-02T08:53:57Z\" level=error msg=\"Failure in zone test.example.io. [Id: /hostedzone/Z06988883Q0H0RL6UMXXX]\" time=\"2022-09-02T08:53:57Z\" level=error msg=\"InvalidChangeBatch: [FATAL problem: DomainLabelTooLong (Domain label is too long) encountered with 'external-dns-a-hello-openshift-aaaaaaaaaa-bbbbbbbbbb-ccccccc']\\n\\tstatus code: 400, request id: e54dfd5a-06c6-47b0-bcb9-a4f7c3a4e0c6\"",
"apiVersion: v1 kind: Namespace metadata: name: external-dns-operator",
"oc apply -f namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: external-dns-operator namespace: external-dns-operator spec: upgradeStrategy: Default targetNamespaces: - external-dns-operator",
"oc apply -f operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: external-dns-operator namespace: external-dns-operator spec: channel: stable-v1 installPlanApproval: Automatic name: external-dns-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"oc apply -f subscription.yaml",
"oc -n external-dns-operator get subscription external-dns-operator --template='{{.status.installplan.name}}{{\"\\n\"}}'",
"oc -n external-dns-operator get ip <install_plan_name> --template='{{.status.phase}}{{\"\\n\"}}'",
"oc -n external-dns-operator get pod",
"NAME READY STATUS RESTARTS AGE external-dns-operator-5584585fd7-5lwqm 2/2 Running 0 11m",
"oc -n external-dns-operator get subscription",
"NAME PACKAGE SOURCE CHANNEL external-dns-operator external-dns-operator redhat-operators stable-v1",
"oc -n external-dns-operator get csv",
"NAME DISPLAY VERSION REPLACES PHASE external-dns-operator.v<1.y.z> ExternalDNS Operator <1.y.z> Succeeded",
"spec: provider: type: AWS 1 aws: credentials: name: aws-access-key 2",
"zones: - \"myzoneid\" 1",
"domains: - filterType: Include 1 matchType: Exact 2 name: \"myzonedomain1.com\" 3 - filterType: Include matchType: Pattern 4 pattern: \".*\\\\.otherzonedomain\\\\.com\" 5",
"source: 1 type: Service 2 service: serviceType: 3 - LoadBalancer - ClusterIP labelFilter: 4 matchLabels: external-dns.mydomain.org/publish: \"yes\" hostnameAnnotation: \"Allow\" 5 fqdnTemplate: - \"{{.Name}}.myzonedomain.com\" 6",
"source: type: OpenShiftRoute 1 openshiftRouteOptions: routerName: default 2 labelFilter: matchLabels: external-dns.mydomain.org/publish: \"yes\"",
"oc whoami",
"system:admin",
"export AWS_ACCESS_KEY_ID=USD(oc get secrets aws-creds -n kube-system --template={{.data.aws_access_key_id}} | base64 -d) export AWS_SECRET_ACCESS_KEY=USD(oc get secrets aws-creds -n kube-system --template={{.data.aws_secret_access_key}} | base64 -d)",
"oc get routes --all-namespaces | grep console",
"openshift-console console console-openshift-console.apps.testextdnsoperator.apacshift.support console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.testextdnsoperator.apacshift.support downloads http edge/Redirect None",
"aws route53 list-hosted-zones | grep testextdnsoperator.apacshift.support",
"HOSTEDZONES terraform /hostedzone/Z02355203TNN1XXXX1J6O testextdnsoperator.apacshift.support. 5",
"cat <<EOF | oc create -f - apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-aws 1 spec: domains: - filterType: Include 2 matchType: Exact 3 name: testextdnsoperator.apacshift.support 4 provider: type: AWS 5 source: 6 type: OpenShiftRoute 7 openshiftRouteOptions: routerName: default 8 EOF",
"aws route53 list-resource-record-sets --hosted-zone-id Z02355203TNN1XXXX1J6O --query \"ResourceRecordSets[?Type == 'CNAME']\" | grep console",
"aws --profile account-a iam get-role --role-name user-rol1 | head -1",
"ROLE arn:aws:iam::1234567890123:role/user-rol1 2023-09-14T17:21:54+00:00 3600 / AROA3SGB2ZRKRT5NISNJN user-rol1",
"aws --profile account-a route53 list-hosted-zones | grep testextdnsoperator.apacshift.support",
"HOSTEDZONES terraform /hostedzone/Z02355203TNN1XXXX1J6O testextdnsoperator.apacshift.support. 5",
"cat <<EOF | oc create -f - apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-aws spec: domains: - filterType: Include matchType: Exact name: testextdnsoperator.apacshift.support provider: type: AWS aws: assumeRole: arn: arn:aws:iam::12345678901234:role/user-rol1 1 source: type: OpenShiftRoute openshiftRouteOptions: routerName: default EOF",
"aws --profile account-a route53 list-resource-record-sets --hosted-zone-id Z02355203TNN1XXXX1J6O --query \"ResourceRecordSets[?Type == 'CNAME']\" | grep console-openshift-console",
"CLIENT_ID=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_client_id}} | base64 -d) CLIENT_SECRET=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_client_secret}} | base64 -d) RESOURCE_GROUP=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_resourcegroup}} | base64 -d) SUBSCRIPTION_ID=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_subscription_id}} | base64 -d) TENANT_ID=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_tenant_id}} | base64 -d)",
"az login --service-principal -u \"USD{CLIENT_ID}\" -p \"USD{CLIENT_SECRET}\" --tenant \"USD{TENANT_ID}\"",
"oc get routes --all-namespaces | grep console",
"openshift-console console console-openshift-console.apps.test.azure.example.com console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.test.azure.example.com downloads http edge/Redirect None",
"az network dns zone list --resource-group \"USD{RESOURCE_GROUP}\"",
"apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-azure 1 spec: zones: - \"/subscriptions/1234567890/resourceGroups/test-azure-xxxxx-rg/providers/Microsoft.Network/dnszones/test.azure.example.com\" 2 provider: type: Azure 3 source: openshiftRouteOptions: 4 routerName: default 5 type: OpenShiftRoute 6",
"az network dns record-set list -g \"USD{RESOURCE_GROUP}\" -z test.azure.example.com | grep console",
"oc get secret gcp-credentials -n kube-system --template='{{USDv := index .data \"service_account.json\"}}{{USDv}}' | base64 -d - > decoded-gcloud.json",
"export GOOGLE_CREDENTIALS=decoded-gcloud.json",
"gcloud auth activate-service-account <client_email as per decoded-gcloud.json> --key-file=decoded-gcloud.json",
"gcloud config set project <project_id as per decoded-gcloud.json>",
"oc get routes --all-namespaces | grep console",
"openshift-console console console-openshift-console.apps.test.gcp.example.com console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.test.gcp.example.com downloads http edge/Redirect None",
"gcloud dns managed-zones list | grep test.gcp.example.com",
"qe-cvs4g-private-zone test.gcp.example.com",
"apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-gcp 1 spec: domains: - filterType: Include 2 matchType: Exact 3 name: test.gcp.example.com 4 provider: type: GCP 5 source: openshiftRouteOptions: 6 routerName: default 7 type: OpenShiftRoute 8",
"gcloud dns record-sets list --zone=qe-cvs4g-private-zone | grep console",
"oc -n external-dns-operator create secret generic infoblox-credentials --from-literal=EXTERNAL_DNS_INFOBLOX_WAPI_USERNAME=<infoblox_username> --from-literal=EXTERNAL_DNS_INFOBLOX_WAPI_PASSWORD=<infoblox_password>",
"oc get routes --all-namespaces | grep console",
"openshift-console console console-openshift-console.apps.test.example.com console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.test.example.com downloads http edge/Redirect None",
"apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-infoblox 1 spec: provider: type: Infoblox 2 infoblox: credentials: name: infoblox-credentials gridHost: USD{INFOBLOX_GRID_PUBLIC_IP} wapiPort: 443 wapiVersion: \"2.3.1\" domains: - filterType: Include matchType: Exact name: test.example.com source: type: OpenShiftRoute 3 openshiftRouteOptions: routerName: default 4",
"oc create -f external-dns-sample-infoblox.yaml",
"oc -n external-dns-operator create configmap trusted-ca",
"oc -n external-dns-operator label cm trusted-ca config.openshift.io/inject-trusted-cabundle=true",
"oc -n external-dns-operator patch subscription external-dns-operator --type='json' -p='[{\"op\": \"add\", \"path\": \"/spec/config\", \"value\":{\"env\":[{\"name\":\"TRUSTED_CA_CONFIGMAP_NAME\",\"value\":\"trusted-ca\"}]}}]'",
"oc -n external-dns-operator exec deploy/external-dns-operator -c external-dns-operator -- printenv TRUSTED_CA_CONFIGMAP_NAME",
"trusted-ca"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/networking/external-dns-operator-1 |
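As a quick end-to-end check that complements the provider-specific listing commands above, you can resolve a published route host from any workstation; this is a minimal sketch that assumes the dig utility is installed and that the records were created in a public zone.

# Pick one route host that the ExternalDNS resource should have published
# (the console route is used here as an example).
ROUTE_HOST=$(oc get route console -n openshift-console -o jsonpath='{.spec.host}')

# The operator creates a CNAME record that targets the Ingress Controller's
# canonical hostname, so both queries should return answers once the zone is updated.
dig +short "${ROUTE_HOST}" CNAME
dig +short "${ROUTE_HOST}"

If the lookup returns nothing, the Operator log command shown earlier in this chapter is a reasonable place to start troubleshooting.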
Chapter 2. Accessing the web console | Chapter 2. Accessing the web console The OpenShift Container Platform web console is a user interface accessible from a web browser. Developers can use the web console to visualize, browse, and manage the contents of projects. 2.1. Prerequisites JavaScript must be enabled to use the web console. For the best experience, use a web browser that supports WebSockets . Review the OpenShift Container Platform 4.x Tested Integrations page before you create the supporting infrastructure for your cluster. 2.2. Understanding and accessing the web console The web console runs as a pod on the control plane node. The static assets required to run the web console are served by the pod. After you install OpenShift Container Platform using the openshift-install create cluster command, you can find the web console URL and login credentials for the installed cluster in the CLI output of the installation program. For example: Example output INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided> Use those details to log in and access the web console. For existing clusters that you did not install, you can use oc whoami --show-console to see the web console URL. Important The dir parameter specifies the assets directory, which stores the manifest files, the ISO image, and the auth directory. The auth directory stores the kubeadmin-password and kubeconfig files. As a kubeadmin user, you can use the kubeconfig file to access the cluster with the following setting: export KUBECONFIG=<install_directory>/auth/kubeconfig . The kubeconfig is specific to the generated ISO image, so if the kubeconfig is set and the oc command fails, it is possible that the system did not boot with the generated ISO image. To perform debugging, during the bootstrap process, you can log in to the console as the core user by using the contents of the kubeadmin-password file. Additional resources Enabling feature sets using the web console | [
"INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/web_console/web-console |
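Putting the pieces above together, the following is a minimal sketch for reaching the web console of a cluster you installed yourself; <install_directory> and the API server URL are placeholders for your own environment.

# Use the kubeconfig generated by the installation program.
export KUBECONFIG=<install_directory>/auth/kubeconfig

# Print the web console URL for the current cluster.
oc whoami --show-console

# Alternatively, log in as kubeadmin with the generated password file,
# then open the printed console URL in a browser.
oc login -u kubeadmin -p "$(cat <install_directory>/auth/kubeadmin-password)" https://api.<cluster_domain>:6443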
Configuring authentication and authorization in RHEL | Configuring authentication and authorization in RHEL Red Hat Enterprise Linux 8 Using SSSD, authselect, and sssctl to configure authentication and authorization Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_authentication_and_authorization_in_rhel/index |
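The title above names the three tools this guide revolves around; as a brief orientation, the commands below are a minimal sketch of how they are typically invoked on a Red Hat Enterprise Linux 8 host with the sssd and authselect packages installed, and they stand in for the detailed procedures in the guide itself.

# Show the currently selected authselect profile and the available profiles.
authselect current
authselect list

# Switch to the sssd profile and enable automatic home directory creation.
authselect select sssd with-mkhomedir --force

# Validate the SSSD configuration and list the domains SSSD knows about.
sssctl config-check
sssctl domain-list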
Chapter 74. KafkaClientAuthenticationScramSha512 schema reference | Chapter 74. KafkaClientAuthenticationScramSha512 schema reference Used in: KafkaBridgeSpec , KafkaConnectSpec , KafkaMirrorMaker2ClusterSpec , KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec Full list of KafkaClientAuthenticationScramSha512 schema properties To configure SASL-based SCRAM-SHA-512 authentication, set the type property to scram-sha-512 . The SCRAM-SHA-512 authentication mechanism requires a username and password. 74.1. username Specify the username in the username property. 74.2. passwordSecret In the passwordSecret property, specify a link to a Secret containing the password. You can use the secrets created by the User Operator. If required, you can create a text file that contains the password, in cleartext, to use for authentication: echo -n PASSWORD > MY-PASSWORD .txt You can then create a Secret from the text file, setting your own field name (key) for the password: oc create secret generic MY-CONNECT-SECRET-NAME --from-file= MY-PASSWORD-FIELD-NAME =./ MY-PASSWORD .txt Example Secret for SCRAM-SHA-512 client authentication for Kafka Connect apiVersion: v1 kind: Secret metadata: name: my-connect-secret-name type: Opaque data: my-connect-password-field: LFTIyFRFlMmU2N2Tm The secretName property contains the name of the Secret , and the password property contains the name of the key under which the password is stored inside the Secret . Important Do not specify the actual password in the password property. Example SASL-based SCRAM-SHA-512 client authentication configuration for Kafka Connect authentication: type: scram-sha-512 username: my-connect-username passwordSecret: secretName: my-connect-secret-name password: my-connect-password-field 74.3. KafkaClientAuthenticationScramSha512 schema properties Property Description passwordSecret Reference to the Secret which holds the password. PasswordSecretSource type Must be scram-sha-512 . string username Username used for the authentication. string | [
"echo -n PASSWORD > MY-PASSWORD .txt",
"create secret generic MY-CONNECT-SECRET-NAME --from-file= MY-PASSWORD-FIELD-NAME =./ MY-PASSWORD .txt",
"apiVersion: v1 kind: Secret metadata: name: my-connect-secret-name type: Opaque data: my-connect-password-field: LFTIyFRFlMmU2N2Tm",
"authentication: type: scram-sha-512 username: my-connect-username passwordSecret: secretName: my-connect-secret-name password: my-connect-password-field"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-KafkaClientAuthenticationScramSha512-reference |
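As an alternative to the intermediate text file shown above, the same Secret can be created in one step; the namespace and the literal password below are placeholders, and the key name must match the password field referenced in the authentication block.

# Create the Secret holding the SCRAM-SHA-512 password under the expected key.
oc create secret generic my-connect-secret-name \
  --from-literal=my-connect-password-field='MyS3cretP4ss' \
  -n my-kafka-namespace

# Confirm the key is present before wiring it into the authentication section.
oc get secret my-connect-secret-name -n my-kafka-namespace -o jsonpath='{.data}'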
19.2.3. Mail User Agent | 19.2.3. Mail User Agent A Mail User Agent ( MUA ) is synonymous with an email client application. An MUA is a program that, at a minimum, allows a user to read and compose email messages. Many MUAs are capable of retrieving messages via the POP or IMAP protocols, setting up mailboxes to store messages, and sending outbound messages to an MTA. MUAs may be graphical, such as Evolution , or have simple text-based interfaces, such as pine . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-email-types-mua |
Chapter 2. Installation | Chapter 2. Installation This chapter guides you through the steps to install AMQ Ruby in your environment. 2.1. Prerequisites You must have a subscription to access AMQ release files and repositories. To install packages on Red Hat Enterprise Linux, you must register your system . To use AMQ Ruby, you must install Ruby in your environment. 2.2. Installing on Red Hat Enterprise Linux Procedure Use the subscription-manager command to subscribe to the required package repositories. If necessary, replace <variant> with the value for your variant of Red Hat Enterprise Linux (for example, server or workstation ). Red Hat Enterprise Linux 7 USD sudo subscription-manager repos --enable=amq-clients-2-for-rhel-7- <variant> -rpms Red Hat Enterprise Linux 8 USD sudo subscription-manager repos --enable=amq-clients-2-for-rhel-8-x86_64-rpms Use the yum command to install the rubygem-qpid_proton and rubygem-qpid_proton-doc packages. USD sudo yum install rubygem-qpid_proton rubygem-qpid_proton-doc For more information about using packages, see Appendix B, Using Red Hat Enterprise Linux packages . | [
"sudo subscription-manager repos --enable=amq-clients-2-for-rhel-7- <variant> -rpms",
"sudo subscription-manager repos --enable=amq-clients-2-for-rhel-8-x86_64-rpms",
"sudo yum install rubygem-qpid_proton rubygem-qpid_proton-doc"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_ruby_client/installation |
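After installation, a short smoke test can confirm that the Ruby bindings load. This is an optional check and assumes the library is required under the name qpid_proton , which is the require name used by the AMQ Ruby client examples:

USD ruby -e 'require "qpid_proton"; puts "qpid_proton loaded"'

If the command prints the message instead of raising a LoadError , the client library is available to your Ruby installation.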
16.17.2. Adding Partitions | 16.17.2. Adding Partitions To add a new partition, select the Create button. A dialog box appears (refer to Figure 16.40, "Creating a New Partition" ). Note You must dedicate at least one partition for this installation, and optionally more. For more information, refer to Appendix A, An Introduction to Disk Partitions . Figure 16.40. Creating a New Partition Mount Point : Enter the partition's mount point. For example, if this partition should be the root partition, enter / ; enter /boot for the /boot partition, and so on. You can also use the pull-down menu to choose the correct mount point for your partition. For a swap partition the mount point should not be set - setting the filesystem type to swap is sufficient. File System Type : Using the pull-down menu, select the appropriate file system type for this partition. For more information on file system types, refer to Section 16.17.2.1, "File System Types" . Allowable Drives : This field contains a list of the hard disks installed on your system. If a hard disk's box is highlighted, then a desired partition can be created on that hard disk. If the box is not checked, then the partition will never be created on that hard disk. By using different checkbox settings, you can have anaconda place partitions where you need them, or let anaconda decide where partitions should go. Size (MB) : Enter the size (in megabytes) of the partition. Note, this field starts with 200 MB; unless changed, only a 200 MB partition will be created. Additional Size Options : Choose whether to keep this partition at a fixed size, to allow it to "grow" (fill up the available hard drive space) to a certain point, or to allow it to grow to fill any remaining hard drive space available. If you choose Fill all space up to (MB) , you must give size constraints in the field to the right of this option. This allows you to keep a certain amount of space free on your hard drive for future use. Force to be a primary partition : Select whether the partition you are creating should be one of the first four partitions on the hard drive. If unselected, the partition is created as a logical partition. Refer to Section A.1.3, "Partitions Within Partitions - An Overview of Extended Partitions" , for more information. Encrypt : Choose whether to encrypt the partition so that the data stored on it cannot be accessed without a passphrase, even if the storage device is connected to another system. Refer to Appendix C, Disk Encryption for information on encryption of storage devices. If you select this option, the installer prompts you to provide a passphrase before it writes the partition to the disk. OK : Select OK once you are satisfied with the settings and wish to create the partition. Cancel : Select Cancel if you do not want to create the partition. 16.17.2.1. File System Types Red Hat Enterprise Linux allows you to create different partition types and file systems. The following is a brief description of the different partition types and file systems available, and how they can be used. Partition types standard partition - A standard partition can contain a file system or swap space, or it can provide a container for software RAID or an LVM physical volume. swap - Swap partitions are used to support virtual memory. In other words, data is written to a swap partition when there is not enough RAM to store the data your system is processing. Refer to the Red Hat Enterprise Linux Deployment Guide for additional information. 
software RAID - Creating two or more software RAID partitions allows you to create a RAID device. For more information regarding RAID, refer to the chapter RAID (Redundant Array of Independent Disks) in the Red Hat Enterprise Linux Deployment Guide . physical volume (LVM) - Creating one or more physical volume (LVM) partitions allows you to create an LVM logical volume. LVM can improve performance when using physical disks. For more information regarding LVM, refer to the Red Hat Enterprise Linux Deployment Guide . File systems ext4 - The ext4 file system is based on the ext3 file system and features a number of improvements. These include support for larger file systems and larger files, faster and more efficient allocation of disk space, no limit on the number of subdirectories within a directory, faster file system checking, and more robust journaling. A maximum file system size of 16TB is supported for ext4. The ext4 file system is selected by default and is highly recommended. Note The user_xattr and acl mount options are automatically set on ext4 systems by the installation system. These options enable extended attributes and access control lists, respectively. More information about mount options can be found in the Red Hat Enterprise Linux Storage Administration Guide . ext3 - The ext3 file system is based on the ext2 file system and has one main advantage - journaling. Using a journaling file system reduces time spent recovering a file system after a crash as there is no need to fsck [8] the file system. A maximum file system size of 16TB is supported for ext3. ext2 - An ext2 file system supports standard Unix file types (regular files, directories, symbolic links, etc). It provides the ability to assign long file names, up to 255 characters. xfs - XFS is a highly scalable, high-performance file system that supports filesystems up to 16 exabytes (approximately 16 million terabytes), files up to 8 exabytes (approximately 8 million terabytes) and directory structures containing tens of millions of entries. XFS supports metadata journaling, which facilitates quicker crash recovery. The XFS file system can also be defragmented and resized while mounted and active. Note The maximum size of an XFS partition the installer can create is 100 TB . vfat - The VFAT file system is a Linux file system that is compatible with Microsoft Windows long filenames on the FAT file system. Btrfs - Btrfs is under development as a file system capable of addressing and managing more files, larger files, and larger volumes than the ext2, ext3, and ext4 file systems. Btrfs is designed to make the file system tolerant of errors, and to facilitate the detection and repair of errors when they occur. It uses checksums to ensure the validity of data and metadata, and maintains snapshots of the file system that can be used for backup or repair. Because Btrfs is still experimental and under development, the installation program does not offer it by default. If you want to create a Btrfs partition on a drive, you must commence the installation process with the boot option btrfs . Refer to Chapter 28, Boot Options for instructions. Warning Red Hat Enterprise Linux 6.9 includes Btrfs as a technology preview to allow you to experiment with this file system. You should not choose Btrfs for partitions that will contain valuable data or that are essential for the operation of important systems. 
[8] The fsck application is used to check the file system for metadata consistency and optionally repair one or more Linux file systems. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/adding_partitions-ppc |
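After the installation finishes and the system boots, you can review the partition layout that was created with standard command-line tools. This is a general post-installation suggestion rather than part of the installer workflow, and the commands assume a default installation:

blkid
df -Th
swapon -s

blkid reports the file system type detected on each block device, df -Th lists mounted file systems with their types and sizes, and swapon -s shows the active swap partitions.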
Chapter 8. Setting up Quarkus applications for HawtIO Online with Jolokia | Chapter 8. Setting up Quarkus applications for HawtIO Online with Jolokia This section describes the enabling of monitoring of a Quarkus application by HawtIO. It starts from first principles in setting up a simple example application. However, should a Quarkus application already have been implemented then skip to " Enabling Jolokia Java-Agent on the Example Quarkus Application ". For convenience, an example project based on this documentation has already been implemented and published here . Simply clone its parent repository and jump to " Deployment of the HawtIO-Enabled Quarkus Application to OpenShift ". Explanation of Hawtio Online Component Any interactions either from users or Hawtio are communicated with the HTTP protocol to an Nginx web server The Nginx web server is the outward-facing interface and the only sub-component visible to external consumers When a request is made, the Nginx web server hands off to the internal Gateway component, which serves 2 distinct purposes: Master-Guard Agent Any request directed towards the target Master Cluster API Server (OpenShift) must pass through this component where checks are made to ensure the requested endpoint URL is approved. URLs that are not approved, eg. requests to secrets or configmaps (potentially security sensitive), are rejected; Jolokia Agent Since pods reside on the Master Cluster, ultimately requests for Jolokia information from pods must also be protected and handled in a secure manner. This agent is responsible for converting a request from a client into the correct form for transmission to the target pod internally and passing the response back to the client. 8.1. Setting up an example Quarkus Application For a new Quarkus application, the maven quarkus quick-start is available, eg. mvn io.quarkus.platform:quarkus-maven-plugin:3.14.2:create \ -DprojectGroupId=org.hawtio \ -DprojectArtifactId=quarkus-helloworld \ -Dextensions='openshift,camel-quarkus-quartz' Use the quarkus-maven-plugin to generate the project scaffolding Set the project maven groupId to org.hawtio and customize as appropriate Set the project maven artifactId to quarkus-helloworld and customize as appropriate Use the following Quarkus extensions: openshift : Enables maven to deploy to local OpenShift cluster; camel-quarkus-quartz : Enables the Camel extension quartz for use in the example Quarkus application Execute the quick-start to create the scaffolding for the Quarkus project and then allow further customization for individual applications. To build and deploy the application to OpenShift, the following properties should be specified in the file src/main/resources/application.properties (see related documentation ). # Set the Docker build strategy quarkus.openshift.build-strategy=docker # Expose the service to create an OpenShift Container Platform route quarkus.openshift.route.expose=true 8.2. Implementing an Example Camel Quarkus Application For this example, a simple Camel 'hello-world' Quarkus application is to be implemented. Add the file src/main/java/org/hawtio/SampleCamelRoute.java to the project with the following content: package org.hawtio; import jakarta.enterprise.context.ApplicationScoped; import org.apache.camel.builder.endpoint.EndpointRouteBuilder; @ApplicationScoped public class SampleCamelRoute extends EndpointRouteBuilder { @Override public void configure() { from(quartz("cron").cron("{{quartz.cron}}")).routeId("cron") .setBody().constant("Hello Camel! 
- cron") .to(stream("out")) .to(mock("result")); from("quartz:simple?trigger.repeatInterval={{quartz.repeatInterval}}").routeId("simple") .setBody().constant("Hello Camel! - simple") .to(stream("out")) .to(mock("result")); } } This example logs "Hello Camel ... " entries in the container log via a Camel route. Modify the src/main/resources/application.properties file with the following properties: # Camel camel.context.name = SampleCamel # Uncomment the following to enable the Camel plugin Trace tab #camel.main.tracing = true #camel.main.backlogTracing = true #camel.main.useBreadcrumb = true # Uncomment to enable debugging of the application and in turn # enables the Camel plugin Debug tab even in non-development # environment #quarkus.camel.debug.enabled = true # Define properties for the Camel quartz component used in the # example quartz.cron = 0/10 * * * * ? quartz.repeatInterval = 10000 Add the following dependencies to the <dependencies> section of file pom.xml . These are required due to the route defined in src/main/java/org/hawtio/SampleCamelRoute.java ; these will need to be modified if the Camel route added to the application is changed: <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-stream</artifactId> </dependency> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-mock</artifactId> </dependency> 8.3. Enabling Jolokia Java-Agent on the Example Quarkus Application In order to ensure that maven properties can be passed through to the src/main/resources/application.properties file, the following should be added to the <build> section of the file pom.xml : <resources> <resource> <directory>src/main/resources</directory> <filtering>true</filtering> </resource> </resources> Add the following Jolokia properties to the <properties> section of the file pom.xml . These will be used to configure the running jolokia java-agent in the Quarkus container (for an explanation of the properties, please refer to the Jolokia JVM Agent documentation): <properties> ... <!-- The current HawtIO Jolokia Version --> <jolokia-version>2.1.0</jolokia-version> <!-- =============================================================== === Jolokia agent configuration for the connection with HawtIO =============================================================== It should use HTTPS and SSL client authentication at minimum. The client principal should match those the HawtIO instance provides (the default is `hawtio-online.hawtio.svc`). --> <jolokia.protocol>https</jolokia.protocol> <jolokia.host>*</jolokia.host> <jolokia.port>8778</jolokia.port> <jolokia.useSslClientAuthentication>true</jolokia.useSslClientAuthentication> <jolokia.caCert>/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt</jolokia.caCert> <jolokia.clientPrincipal.1>cn=hawtio-online.hawtio.svc</jolokia.clientPrincipal.1> <jolokia.extendedClientCheck>true</jolokia.extendedClientCheck> <jolokia.discoveryEnabled>false</jolokia.discoveryEnabled> ... </properties> Add the following dependencies to the <dependencies> section of the file pom.xml : <!-- This dependency is required for enabling Camel management via JMX / HawtIO. --> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-management</artifactId> </dependency> <!-- This dependency is optional for monitoring with HawtIO but is required for HawtIO view the Camel routes source XML. 
--> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-jaxb</artifactId> </dependency> <!-- Add this optional dependency, to enable Camel plugin debugging feature. --> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-debug</artifactId> </dependency> <!-- This dependency is required to include the Jolokia agent jvm for access to JMX beans. --> <dependency> <groupId>org.jolokia</groupId> <artifactId>jolokia-agent-jvm</artifactId> <version>USD{jolokia-version}</version> <classifier>javaagent</classifier> </dependency> With maven property filtering implemented, the USD{jolokia... } environment variables should be passed-through from the pom.xml during the building of the application. The purpose of this property is to append a JVM option to the executing process of the container that runs the jolokia java-agent. Modify the src/main/resources/application.properties file with the following property: # Enable the jolokia java-agent on the quarkus application quarkus.openshift.env.vars.JAVA_OPTS_APPEND=-javaagent:lib/main/org.jolokia.jolokia-agent-jvm-USD{jolokia-version}-javaagent.jar=protocol=USD{jolokia.protocol}\,host=USD{jolokia.host}\,port=USD{jolokia.port}\,useSslClientAuthentication=USD{jolokia.useSslClientAuthentication}\,caCert=USD{jolokia.caCert}\,clientPrincipal.1=USD{jolokia.clientPrincipal.1}\,extendedClientCheck=USD{jolokia.extendedClientCheck}\,discoveryEnabled=USD{jolokia.discoveryEnabled} 8.4. Exposing the Jolokia Port from the Quarkus Container for Discovery by HawtIO For HawtIO to discover the deployed application, a port named jolokia must be present on the executing container. Therefore, it is necessary to add the following properties in the src/main/resources/application.properties file: # Define the Jolokia port on the container for HawtIO access quarkus.openshift.ports.jolokia.container-port=USD{jolokia.port} quarkus.openshift.ports.jolokia.protocol=TCP 8.5. Deployment of the HawtIO-Enabled Quarkus Application to OpenShift Pre-requsites : Command-line (CLI) is already logged-in to the OpenShift cluster and the project is selected. When all files have been configured, the following maven command can be executed: ./mvnw clean package -Dquarkus.kubernetes.deploy=true Verify that the Quarkus application is running correctly using the Verification steps detailed here . Assuming the application is running correctly, the new Quarkus application should be discovered by an HawtIO instance (depending on its mode - 'Namespace' mode requires it to be in the same project). The new container should be displayed like in the following screenshot: By clicking Connect, the Quarkus application can be examined by HawtIO. See also : Deploying Quarkus application to OpenShift A Quarkus application: Getting started Using the Quarkus Openshift Extension | [
"mvn io.quarkus.platform:quarkus-maven-plugin:3.14.2:create -DprojectGroupId=org.hawtio -DprojectArtifactId=quarkus-helloworld -Dextensions='openshift,camel-quarkus-quartz'",
"Set the Docker build strategy quarkus.openshift.build-strategy=docker # Expose the service to create an OpenShift Container Platform route quarkus.openshift.route.expose=true",
"package org.hawtio; import jakarta.enterprise.context.ApplicationScoped; import org.apache.camel.builder.endpoint.EndpointRouteBuilder; @ApplicationScoped public class SampleCamelRoute extends EndpointRouteBuilder { @Override public void configure() { from(quartz(\"cron\").cron(\"{{quartz.cron}}\")).routeId(\"cron\") .setBody().constant(\"Hello Camel! - cron\") .to(stream(\"out\")) .to(mock(\"result\")); from(\"quartz:simple?trigger.repeatInterval={{quartz.repeatInterval}}\").routeId(\"simple\") .setBody().constant(\"Hello Camel! - simple\") .to(stream(\"out\")) .to(mock(\"result\")); } }",
"Camel camel.context.name = SampleCamel # Uncomment the following to enable the Camel plugin Trace tab #camel.main.tracing = true #camel.main.backlogTracing = true #camel.main.useBreadcrumb = true # Uncomment to enable debugging of the application and in turn # enables the Camel plugin Debug tab even in non-development # environment #quarkus.camel.debug.enabled = true # Define properties for the Camel quartz component used in the # example quartz.cron = 0/10 * * * * ? quartz.repeatInterval = 10000",
"<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-stream</artifactId> </dependency> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-mock</artifactId> </dependency>",
"<resources> <resource> <directory>src/main/resources</directory> <filtering>true</filtering> </resource> </resources>",
"<properties> <!-- The current HawtIO Jolokia Version --> <jolokia-version>2.1.0</jolokia-version> <!-- =============================================================== === Jolokia agent configuration for the connection with HawtIO =============================================================== It should use HTTPS and SSL client authentication at minimum. The client principal should match those the HawtIO instance provides (the default is `hawtio-online.hawtio.svc`). --> <jolokia.protocol>https</jolokia.protocol> <jolokia.host>*</jolokia.host> <jolokia.port>8778</jolokia.port> <jolokia.useSslClientAuthentication>true</jolokia.useSslClientAuthentication> <jolokia.caCert>/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt</jolokia.caCert> <jolokia.clientPrincipal.1>cn=hawtio-online.hawtio.svc</jolokia.clientPrincipal.1> <jolokia.extendedClientCheck>true</jolokia.extendedClientCheck> <jolokia.discoveryEnabled>false</jolokia.discoveryEnabled> </properties>",
"<!-- This dependency is required for enabling Camel management via JMX / HawtIO. --> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-management</artifactId> </dependency> <!-- This dependency is optional for monitoring with HawtIO but is required for HawtIO view the Camel routes source XML. --> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-jaxb</artifactId> </dependency> <!-- Add this optional dependency, to enable Camel plugin debugging feature. --> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-debug</artifactId> </dependency> <!-- This dependency is required to include the Jolokia agent jvm for access to JMX beans. --> <dependency> <groupId>org.jolokia</groupId> <artifactId>jolokia-agent-jvm</artifactId> <version>USD{jolokia-version}</version> <classifier>javaagent</classifier> </dependency>",
"Enable the jolokia java-agent on the quarkus application quarkus.openshift.env.vars.JAVA_OPTS_APPEND=-javaagent:lib/main/org.jolokia.jolokia-agent-jvm-USD{jolokia-version}-javaagent.jar=protocol=USD{jolokia.protocol}\\,host=USD{jolokia.host}\\,port=USD{jolokia.port}\\,useSslClientAuthentication=USD{jolokia.useSslClientAuthentication}\\,caCert=USD{jolokia.caCert}\\,clientPrincipal.1=USD{jolokia.clientPrincipal.1}\\,extendedClientCheck=USD{jolokia.extendedClientCheck}\\,discoveryEnabled=USD{jolokia.discoveryEnabled}",
"Define the Jolokia port on the container for HawtIO access quarkus.openshift.ports.jolokia.container-port=USD{jolokia.port} quarkus.openshift.ports.jolokia.protocol=TCP",
"./mvnw clean package -Dquarkus.kubernetes.deploy=true"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/hawtio_diagnostic_console_guide/setting-applications-for-hawtio-online |
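As an additional check after the deployment completes, you can confirm that the container exposes the named jolokia port that HawtIO Online uses for discovery. The command below is a suggested sketch: it assumes the Quarkus OpenShift extension created a Deployment named after the artifactId ( quarkus-helloworld ); adjust the resource kind and name to match your cluster if they differ:

oc get deployment quarkus-helloworld -o jsonpath='{.spec.template.spec.containers[0].ports}'

The output should include a port named jolokia on container port 8778, matching the quarkus.openshift.ports.jolokia.container-port property set earlier.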
Chapter 4. Next steps | Chapter 4. Next steps This document shows how to configure and deploy a proof-of-concept version of Red Hat Quay. For more information on deploying to a production environment, see the guide "Deploy Red Hat Quay - High Availability". The "Use Red Hat Quay" guide shows you how to: Add users and repositories Use tags Automatically build Dockerfiles with build workers Set up build triggers Add notifications for repository events The "Manage Red Hat Quay" guide shows you how to: Use SSL and TLS Enable security scanning with Clair Use repository mirroring Configure LDAP authentication Use georeplication of storage | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/deploy_red_hat_quay_for_proof-of-concept_non-production_purposes/next_steps
macro::json_output_array_numeric_value | macro::json_output_array_numeric_value Name macro::json_output_array_numeric_value - Output a numeric value for metric in an array. Synopsis Arguments array_name The name of the array. array_index The array index (as a string) indicating where to store the numeric value. metric_name The name of the numeric metric. value The numeric value to output. Description The json_output_array_numeric_value macro is designed to be called from the 'json_data' probe in the user's script to output a metric's numeric value that is in an array. This metric should have been added with json_add_array_numeric_metric . | [
"@json_output_array_numeric_value(array_name,array_index,metric_name,value)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-json-output-array-numeric-value |
Chapter 2. Managing project networks | Chapter 2. Managing project networks Project networks help you to isolate network traffic for cloud computing. You can create IPv4 or IPv6 networks and subnets, and use routers to connect them. This section contains the following topics: Section 2.1, "Creating a network" Section 2.2, "Creating a subnet" Section 2.3, "IPv6 subnet options" Section 2.4, "Create an IPv6 subnet using Stateful DHCPv6" Section 2.5, "DVR overview" Section 2.6, "Adding a router" Section 2.7, "Adding a subnet to a router" Section 2.8, "Removing a subnet from a router" Section 2.9, "Purging all resources and deleting a project" Section 2.10, "Deleting a router" Section 2.11, "Deleting a subnet" Section 2.12, "Deleting a network" 2.1. Creating a network Create a network so that instances in your Red Hat OpenStack Services on OpenShift (RHOSO) can communicate with each other and receive IP addresses using DHCP. For more information about external network connections, see Bridging the physical network in Configuring networking services . Prerequisites The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud. The python-openstackclient package resides on your workstation. Procedure Confirm that the system OS_CLOUD variable is set for your cloud: USD echo USDOS_CLOUD my_cloud Reset the variable if necessary: USD export OS_CLOUD=my_other_cloud As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command. Create a network. Example USD openstack network create my_network Tip If you do not intend to enter the network into production immediately, you can add the --disable option to the create network command. Consider naming your network for the role that the network provides. One example: webservers_122 is a network that host web servers and the VLAN ID is 122 . Verification Confirm that the network was added: USD openstack network list -c Name Sample output +-------------+ | Name | +-------------+ | my_network | | private | | public | | lb-mgmt-net | +-------------+ steps If you need to modify the network, use the openstack network set command. Most networks need at least one subnet. Proceed to Section 2.2, "Creating a subnet" . Additional resources network create in Command line interface reference network set in Command line interface reference 2.2. Creating a subnet Red Hat OpenStack Services on OpenShift (RHOSO) environments rely on subnets to logically divide a network as a means to provide efficiency and isolation. VM instances using the same subnet can communicate without passing data traffic through a router. Locating systems that require a high volume of traffic between them in the same subnet, avoids the subsequent latency and load. By using different subnets for different purposes, you can isolate data traffic. For example, web server traffic might use one subnet, and database traffic might use another subnet. Instances that want to communicate with instances in another subnet must direct their data traffic through a router. Prerequisites The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud. The python-openstackclient package resides on your workstation. 
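If you want to confirm this prerequisite before you begin, you can check that the client package is installed, for example:

dnf list installed python-openstackclient

This check is optional and assumes an RPM-based installation of the client; adjust it if you installed the client in another way.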
Procedure Confirm that the system OS_CLOUD variable is set for your cloud: USD echo USDOS_CLOUD my_cloud Reset the variable if necessary: USD export OS_CLOUD=my_other_cloud As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command. Obtain the name of the network to which you want to add a subnet: USD openstack network list Create a subnet. USD openstack subnet create <subnet-name> --subnet-range <CIDR> --network <name> Replace <subnet-name> with the name of the subnet you are creating. Replace <CIDR> with the IP address range and subnet mask in CIDR format. To determine the CIDR address, calculate the number of bits masked in the subnet mask and append that value to the IP address range. For example, the subnet mask 255.255.255.0 has 24 masked bits. To use this mask with the IPv4 address range 192.168.122.0, specify the address 192.168.122.0/24. Replace <name> with the name of the network to which you are adding the subnet. Tip DHCP services automate the distribution of IP settings to your instances and are enabled by default. Specify --no-dhcp to disable DHCP. Verification Confirm that the subnet was added: USD openstack subnet list -c Name -c Network -c Subnet --max-width=72 Sample output +----------------------+-------------------------+---------------------+ | Name | Network | Subnet | +----------------------+-------------------------+---------------------+ | private_subnet24 | 317be3d3-5265-43f7-b52b | 10.0.24.0/24 | | | -930e3fd19b8b | | | lb-mgmt-subnet | c4588d49-9151-414b-8832 | 172.24.0.0/16 | | | -37313e3b4c57 | | | external_subnet | bcdb3cc0-8c0b-4d2d-813c | 10.0.0.0/24 | | | -e141bb97aa8f | | | external_ipv6_subnet | bcdb3cc0-8c0b-4d2d-813c | 2620:52:0:13b8::/64 | | | -e141bb97aa8f | | | private_subnet | 317be3d3-5265-43f7-b52b | 10.0.1.0/24 | | | -930e3fd19b8b | | +----------------------+-------------------------+---------------------+ When you create instances, you can configure them to use this subnet, and they receive any specified DHCP options. Next steps If you need to modify the subnet, use the openstack subnet set command. Additional resources subnet create in Command line interface reference subnet set in Command line interface reference 2.3. IPv6 subnet options In Red Hat OpenStack Services on OpenShift (RHOSO) environments, when you create IPv6 subnets in a project network you can specify address mode and router advertisement mode to obtain a particular result as described in the following table. Note RHOSO does not support IPv6 prefix delegation from an external entity. You must obtain the Global Unicast Address (GUA) prefix from your external prefix delegation router and set it by using the subnet-range argument during creation of an IPv6 subnet.
Example USD openstack subnet create --ip-version 6\ --subnet-range 2002:c000:200::64 \ --no-dhcp \ --gateway 2002:c000:2fe:: \ --dns-nameserver 2002:c000:2fe:: \ --network provider \ provider-subnet-2002:c000:200:: Sample output +----------------------+--------------------------------------+ | Field | Value | +----------------------+--------------------------------------+ | allocation_pools | 2002:c000:200::64-2002:c000:200::64 | | cidr | 2002:c000:200::64/128 | | created_at | 2024-09-24T19:30:07Z | | description | | | dns_nameservers | 2002:c000:2fe:: | | dns_publish_fixed_ip | None | | enable_dhcp | False | | gateway_ip | 2002:c000:200::64 | | host_routes | | | id | 49dda67d-814e-457b-b14b-77ef32935c0f | | ip_version | 6 | | ipv6_address_mode | None | | ipv6_ra_mode | None | | name | provider-subnet-2002:c000:200:: | | network_id | bcdb3cc0-8c0b-4d2d-813c-e141bb97aa8f | | prefix_length | None | | project_id | 24089d2fe1a94dd29ca2f665794fbe92 | | revision_number | 0 | | segment_id | None | | service_types | None | | subnetpool_id | None | | tags | | | updated_at | 2024-09-24T19:30:07Z | +----------------------+--------------------------------------+ RA Mode Address Mode Result ipv6_ra_mode=not set ipv6-address-mode=slaac The instance receives an IPv6 address from the external router (not managed by OpenStack Networking) using stateless address autoconfiguration (SLAAC). Note The RHOSO Networking service (neutron) supports only EUI-64 IPv6 address assignment for SLAAC. This allows for simplified IPv6 networking, as hosts self-assign addresses based on the base 64-bits plus the MAC address. You cannot create subnets with a different netmask and address_assign_type of SLAAC. ipv6_ra_mode=not set ipv6-address-mode=dhcpv6-stateful The instance receives an IPv6 address and optional information from the Networking service (dnsmasq) using DHCPv6 stateful . ipv6_ra_mode=not set ipv6-address-mode=dhcpv6-stateless The instance receives an IPv6 address from the external router using SLAAC, and optional information from the Networking service (dnsmasq) using DHCPv6 stateless . ipv6_ra_mode=slaac ipv6-address-mode=not-set The instance uses SLAAC to receive an IPv6 address from the Networking service ( radvd ). ipv6_ra_mode=dhcpv6-stateful ipv6-address-mode=not-set The instance receives an IPv6 address and optional information from an external DHCPv6 server using DHCPv6 stateful . ipv6_ra_mode=dhcpv6-stateless ipv6-address-mode=not-set The instance receives an IPv6 address from the Networking service ( radvd ) using SLAAC, and optional information from an external DHCPv6 server using DHCPv6 stateless . ipv6_ra_mode=slaac ipv6-address-mode=slaac The instance receives an IPv6 address from the Networking service ( radvd ) using SLAAC . ipv6_ra_mode=dhcpv6-stateful ipv6-address-mode=dhcpv6-stateful The instance receives an IPv6 address from OpenStack Networking ( dnsmasq ) using DHCPv6 stateful , and optional information from OpenStack Networking ( dnsmasq ) using DHCPv6 stateful . ipv6_ra_mode=dhcpv6-stateless ipv6-address-mode=dhcpv6-stateless The instance receives an IPv6 address from OpenStack Networking ( radvd ) using SLAAC , and optional information from OpenStack Networking ( dnsmasq ) using DHCPv6 stateless . Additional resources Create an IPv6 subnet using Stateful DHCPv6 2.4. Create an IPv6 subnet using Stateful DHCPv6 In Red Hat OpenStack Services on OpenShift (RHOSO) environments, you can create an IPv6 subnet in a Red Hat OpenStack (RHOSP) project network. 
Prerequisites The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud. The python-openstackclient package resides on your workstation. Procedure Confirm that the system OS_CLOUD variable is set for your cloud: USD echo USDOS_CLOUD my_cloud Reset the variable if necessary: USD export OS_CLOUD=my_other_cloud As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command. Obtain the project ID of the project where you want to create the IPv6 subnet. Retain this ID, because you will need it later: USD openstack project list Sample output +----------------------------------+----------+ | ID | Name | +----------------------------------+----------+ | 25837c567ed5458fbb441d39862e1399 | QA | | f59f631a77264a8eb0defc898cb836af | admin | | 4e2e1951e70643b5af7ed52f3ff36539 | demo | | 8561dff8310e4cd8be4b6fd03dc8acf5 | services | +----------------------------------+----------+ Obtain the name of the network where you want to host the IPv6 subnet. Retain this name, because you will need it later: USD openstack network list -c Name -c Subnets --max-width=72 Sample output +-------------+--------------------------------------------------------+ | Name | Subnets | +-------------+--------------------------------------------------------+ | private | 47d34cf0-0dd2-49bd-a985-67311d80c5c4, | | | 82014d36-9e60-43eb-92fc-74674573f4e8, | | | d7535565-113f-4192-baa6-da21f301f141 | | private2 | 7ee56cef-83c0-40d1-b4e7-5287dae1c23c | | public | 6745edd4-d15f-4971-89bf-70307b0ad2f1, | | | cc3f81bb-4d55-4ead-aad4-5362a7ca5b04 | | lb-mgmt-net | 5ca08724-568c-4030-93eb-f2e286570a25 | +-------------+--------------------------------------------------------+ Using the project ID and network name, create an IPv6 subnet: Example USD openstack subnet create --ip-version 6 --ipv6-address-mode \ dhcpv6-stateful --project 25837c567ed5458fbb441d39862e1399 \ --network private2 --subnet-range fdf8:f53b:82e4::53/125 \ subnet_name Sample output +-------------------+--------------------------------------------------------------+ | Field | Value | +-------------------+--------------------------------------------------------------+ | allocation_pools | {"start": "fdf8:f53b:82e4::52", "end": "fdf8:f53b:82e4::56"} | | cidr | fdf8:f53b:82e4::53/125 | | dns_nameservers | | | enable_dhcp | True | | gateway_ip | fdf8:f53b:82e4::51 | | host_routes | | | id | cdfc3398-997b-46eb-9db1-ebbd88f7de05 | | ip_version | 6 | | ipv6_address_mode | dhcpv6-stateful | | ipv6_ra_mode | | | name | | | network_id | 6aff6826-4278-4a35-b74d-b0ca0cbba340 | | tenant_id | 25837c567ed5458fbb441d39862e1399 | +-------------------+--------------------------------------------------------------+ Verification Validate this configuration by reviewing the network list. USD openstack network list -c Name -c Subnets --max-width=72 Sample output Note that the entry for private2 now reflects the newly created IPv6 subnet: Create an instance, and confirm that the instance is associated with a DHCP IPv6 address when added to the private2 subnet: USD openstack server list -c Name -c Status -c Networks Sample output +---------+--------+-----------------------------+ | Name | Status | Networks | +---------+--------+-----------------------------+ | server1 | ACTIVE | private2=fdf8:f53b:82e4::52 | +---------+--------+-----------------------------+ Additional resources IPv6 subnet options subnet create in Command line interface reference 2.5. 
DVR overview In Red Hat OpenStack Services on OpenShift (RHOSO) environments, distributed virtual routing (DVR) isolates the failure domain of the control plane and optimizes network traffic by using layer 3 high availability (L3 HA) and scheduling routers on every Compute node. DVR has these characteristics: East-West traffic is routed directly on the Compute nodes in a distributed fashion. North-South traffic that uses floating IP addresses is distributed and routed on the Compute nodes. This requires the external network to be connected to every Compute node. North-South traffic without floating IP addresses (SNAT traffic) is not distributed. The OVN metadata agent is distributed and deployed on all Compute nodes. The metadata proxy service is hosted on all the distributed routers. 2.6. Adding a router In Red Hat OpenStack Services on OpenShift (RHOSO) environments, the Networking service (neutron) provides routing services using an SDN-based virtual router. Routers are a requirement for your instances to communicate with external subnets, including those in the physical network. Routers and subnets connect using interfaces, with each subnet requiring its own interface to the router. The default gateway of a router defines the next hop for any traffic received by the router. Its network is typically configured to route traffic to the external physical network using a virtual bridge. Prerequisites The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud. The python-openstackclient package resides on your workstation. Procedure Confirm that the system OS_CLOUD variable is set for your cloud: USD echo USDOS_CLOUD my_cloud Reset the variable if necessary: USD export OS_CLOUD=my_other_cloud As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command. Create a router. USD openstack router create <router-name> --external-gateway <network> \ --project <name> Replace <router-name> with the name of the router you are creating. Replace <network> with the name of an external network. Replace <name> with the name of your project. Tip If you want to create a router in an availability zone (AZ), add the --availability-zone-hint <availability-zone> option. Repeat the option to set multiple AZs. Verification Confirm that the router was created: USD openstack router list -c Name -c Status -c State -c Project Sample output +---------+--------+-------+----------------------------------+ | Name | Status | State | Project | +---------+--------+-------+----------------------------------+ | router1 | ACTIVE | UP | 24089d2fe1a94dd29ca2f665794fbe92 | | router2 | ACTIVE | UP | ecf285621c509223ade3358691bbde59 | +---------+--------+-------+----------------------------------+ Next steps Proceed to Adding a subnet to a router . Additional resources subnet create in Command line interface reference subnet set in Command line interface reference 2.7. Adding a subnet to a router In Red Hat OpenStack Services on OpenShift (RHOSO) environments, to allow traffic to be routed to and from your network, you must add an interface for the subnet to an existing virtual router. Prerequisites The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud. The python-openstackclient package resides on your workstation. You have access to a virtual router.
Procedure Confirm that the system OS_CLOUD variable is set for your cloud: USD echo USDOS_CLOUD my_cloud Reset the variable if necessary: USD export OS_CLOUD=my_other_cloud As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command. Obtain the virtual router name that you are using to route traffic to and from your network. Retain this name, because you will need it later: USD openstack router list Sample output +---------+--------+-------+----------------------------------+ | Name | Status | State | Project | +---------+--------+-------+----------------------------------+ | router1 | ACTIVE | UP | 24089d2fe1a94dd29ca2f665794fbe92 | | router2 | ACTIVE | UP | ecf285621c509223ade3358691bbde59 | +---------+--------+-------+----------------------------------+ Obtain the name of the subnet for which you want to add an interface for your router. Retain this name, because you will need it later: USD openstack subnet list -c Name -c Network -c Subnet --max-width=72 Sample output +----------------------+-------------------------+---------------------+ | Name | Network | Subnet | +----------------------+-------------------------+---------------------+ | private_subnet24 | 317be3d3-5265-43f7-b52b | 10.0.24.0/24 | | | -930e3fd19b8b | | | lb-mgmt-subnet | c4588d49-9151-414b-8832 | 172.24.0.0/16 | | | -37313e3b4c57 | | | external_subnet | bcdb3cc0-8c0b-4d2d-813c | 10.0.0.0/24 | | | -e141bb97aa8f | | | external_ipv6_subnet | bcdb3cc0-8c0b-4d2d-813c | 2620:52:0:13b8::/64 | | | -e141bb97aa8f | | | private_subnet | 317be3d3-5265-43f7-b52b | 10.0.1.0/24 | | | -930e3fd19b8b | | +----------------------+-------------------------+---------------------+ Using the router and subnet names, add an interface to your router for the subnet. Example In this example, private_subnet is added as an interface on router2 : USD openstack router add subnet router2 private_subnet Verification Confirm that the router lists the interface that was added: Example USD openstack router show router2 --max-width=72 Sample output Additional resources Adding a router 2.8. Removing a subnet from a router In Red Hat OpenStack Services on OpenShift (RHOSO) environments, if you no longer require a router to direct network traffic to a subnet, you can remove the corresponding interface from the router. Prerequisites The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud. The python-openstackclient package resides on your workstation. You have access to a virtual router. Procedure Confirm that the system OS_CLOUD variable is set for your cloud: USD echo USDOS_CLOUD my_cloud Reset the variable if necessary: USD export OS_CLOUD=my_other_cloud As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command. Obtain the virtual router name that contains the interface to a subnet that you want to remove. Retain this name, because you will need it later: USD openstack router list Sample output +---------+--------+-------+----------------------------------+ | Name | Status | State | Project | +---------+--------+-------+----------------------------------+ | router1 | ACTIVE | UP | 24089d2fe1a94dd29ca2f665794fbe92 | | router2 | ACTIVE | UP | ecf285621c509223ade3358691bbde59 | +---------+--------+-------+----------------------------------+ Obtain the name of the subnet that you are using as the interface for your router. 
Retain this name, because you will need it later: USD openstack subnet list -c Name -c Network -c Subnet --max-width=72 Sample output +----------------------+-------------------------+---------------------+ | Name | Network | Subnet | +----------------------+-------------------------+---------------------+ | private_subnet24 | 317be3d3-5265-43f7-b52b | 10.0.24.0/24 | | | -930e3fd19b8b | | | lb-mgmt-subnet | c4588d49-9151-414b-8832 | 172.24.0.0/16 | | | -37313e3b4c57 | | | external_subnet | bcdb3cc0-8c0b-4d2d-813c | 10.0.0.0/24 | | | -e141bb97aa8f | | | external_ipv6_subnet | bcdb3cc0-8c0b-4d2d-813c | 2620:52:0:13b8::/64 | | | -e141bb97aa8f | | | private_subnet | 317be3d3-5265-43f7-b52b | 10.0.1.0/24 | | | -930e3fd19b8b | | +----------------------+-------------------------+---------------------+ Using the router and subnet names, remove the interface from your router. Example In this example, private_subnet is removed as an interface on router2 : USD openstack router remove subnet router2 private_subnet Verification Confirm that the subnet has been removed from the router: Example USD openstack router show router2 --max-width=72 Sample output Additional resources router remove subnet in Command line interface reference 2.9. Purging all resources and deleting a project In Red Hat OpenStack Services on OpenShift (RHOSO) environments, you can delete all resources that belong to a particular project as well as deleting the project, too. Prerequisites The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud. The python-openstackclient package resides on your workstation. Procedure Confirm that the system OS_CLOUD variable is set for your cloud: USD echo USDOS_CLOUD my_cloud Reset the variable if necessary: USD export OS_CLOUD=my_other_cloud As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command. Using the project name, delete the project and all of the resources associated with the project: Example In this example, test-project is deleted: USD openstack project purge --project test-project Verification Confirm that the project has been deleted: USD openstack project list Sample output +----------------------------------+--------------+ | ID | Name | +----------------------------------+--------------+ | 519e6344f82e4c079c8e2eabb690023b | services | | 80bf5732752a41128e612fe615c886c6 | demo | | 98a2f53c20ce4d50a40dac4a38016c69 | admin | +----------------------------------+--------------+ Additional resources project purge in Command line interface reference 2.10. Deleting a router In Red Hat OpenStack Services on OpenShift (RHOSO) environments, you can delete a router if it has no connected interfaces. Prerequisites The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud. The python-openstackclient package resides on your workstation. No interfaces are connected to the router that you want to delete. Procedure Confirm that the system OS_CLOUD variable is set for your cloud: USD echo USDOS_CLOUD my_cloud Reset the variable if necessary: USD export OS_CLOUD=my_other_cloud As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command. Obtain the name of the router that you want to delete. Retain this name, because you will need it later. 
USD openstack router list -c Name -c Status -c State -c Project Sample output +---------+--------+-------+----------------------------------+ | Name | Status | State | Project | +---------+--------+-------+----------------------------------+ | router1 | ACTIVE | UP | 24089d2fe1a94dd29ca2f665794fbe92 | | router2 | ACTIVE | UP | ecf285621c509223ade3358691bbde59 | +---------+--------+-------+----------------------------------+ Using the router name, delete the router. Example In this example, router2 is deleted: USD openstack router delete router2 Verification Confirm that the router was deleted: USD openstack router list -c Name -c Status -c State -c Project Sample output +---------+--------+-------+----------------------------------+ | Name | Status | State | Project | +---------+--------+-------+----------------------------------+ | router1 | ACTIVE | UP | 24089d2fe1a94dd29ca2f665794fbe92 | +---------+--------+-------+----------------------------------+ Additional resources router delete in Command line interface reference 2.11. Deleting a subnet In Red Hat OpenStack Services on OpenShift (RHOSO) environments, you can delete a subnet if it is no longer in use. Prerequisites The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud. The python-openstackclient package resides on your workstation. No instances are configured to use the subnet that you want to delete. Procedure Confirm that the system OS_CLOUD variable is set for your cloud: USD echo USDOS_CLOUD my_cloud Reset the variable if necessary: USD export OS_CLOUD=my_other_cloud As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command. Obtain the name of the subnet that you want to delete. Retain this name, because you will need it later. USD openstack subnet list Sample output +----------------------+-------------------------+---------------------+ | Name | Network | Subnet | +----------------------+-------------------------+---------------------+ | private_subnet24 | 317be3d3-5265-43f7-b52b | 10.0.24.0/24 | | | -930e3fd19b8b | | | lb-mgmt-subnet | c4588d49-9151-414b-8832 | 172.24.0.0/16 | | | -37313e3b4c57 | | | external_subnet | bcdb3cc0-8c0b-4d2d-813c | 10.0.0.0/24 | | | -e141bb97aa8f | | | private2_subnet | 56e73380-a771-408f-bdc0 | 10.1.2.0/24 | | | -1e97f79677c6 | | | private_subnet2 | 317be3d3-5265-43f7-b52b | 10.0.2.0/24 | | | -930e3fd19b8b | | | external_ipv6_subnet | bcdb3cc0-8c0b-4d2d-813c | 2620:52:0:13b8::/64 | | | -e141bb97aa8f | | | private_subnet | 317be3d3-5265-43f7-b52b | 10.0.1.0/24 | | | -930e3fd19b8b | | +----------------------+-------------------------+---------------------+ Using the subnet name, delete the subnet. 
Example In this example, private_subnet24 is deleted: USD openstack subnet delete private_subnet24 Verification Confirm that the subnet was deleted: USD openstack subnet list -c Name -c Network -c Subnet --max-width=72 Sample output +----------------------+-------------------------+---------------------+ | Name | Network | Subnet | +----------------------+-------------------------+---------------------+ | lb-mgmt-subnet | c4588d49-9151-414b-8832 | 172.24.0.0/16 | | | -37313e3b4c57 | | | external_subnet | bcdb3cc0-8c0b-4d2d-813c | 10.0.0.0/24 | | | -e141bb97aa8f | | | private2_subnet | 56e73380-a771-408f-bdc0 | 10.1.2.0/24 | | | -1e97f79677c6 | | | private_subnet2 | 317be3d3-5265-43f7-b52b | 10.0.2.0/24 | | | -930e3fd19b8b | | | external_ipv6_subnet | bcdb3cc0-8c0b-4d2d-813c | 2620:52:0:13b8::/64 | | | -e141bb97aa8f | | | private_subnet | 317be3d3-5265-43f7-b52b | 10.0.1.0/24 | | | -930e3fd19b8b | | +----------------------+-------------------------+---------------------+ Additional resources subnet create in Command line interface reference subnet set in Command line interface reference 2.12. Deleting a network In Red Hat OpenStack Services on OpenShift (RHOSO) environments, there are occasions where it becomes necessary to delete a network that was previously created, perhaps as housekeeping or as part of a decommissioning process. Prerequisites The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud. The python-openstackclient package resides on your workstation. No interfaces are connected to the network that you want to delete. Procedure Confirm that the system OS_CLOUD variable is set for your cloud: USD echo USDOS_CLOUD my_cloud Reset the variable if necessary: USD export OS_CLOUD=my_other_cloud As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command. Obtain the name of the network that you want to delete. Retain this name, because you will need it later. USD openstack network list -c Name Sample output +-------------+ | Name | +-------------+ | my_network | | private | | public | | lb-mgmt-net | +-------------+ Using the network name, delete the network. Example In this example, my_network is deleted: USD openstack network delete my_network Verification Confirm that the network was deleted: USD openstack network list -c Name Sample output +-------------+ | Name | +-------------+ | private | | public | | lb-mgmt-net | +-------------+ Additional resources link:https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/network#network_delete [network delete] in Command line interface reference | [
"dnf list installed python-openstackclient",
"echo USDOS_CLOUD my_cloud",
"export OS_CLOUD=my_other_cloud",
"openstack network create my_network",
"openstack network list -c Name",
"+-------------+ | Name | +-------------+ | my_network | | private | | public | | lb-mgmt-net | +-------------+",
"dnf list installed python-openstackclient",
"echo USDOS_CLOUD my_cloud",
"export OS_CLOUD=my_other_cloud",
"openstack network list",
"openstack subnet create <subnet-name> --subnet-range <CIDR> --network <name>",
"openstack subnet list -c Name -c Network -c Subnet --max-width=72",
"+----------------------+-------------------------+---------------------+ | Name | Network | Subnet | +----------------------+-------------------------+---------------------+ | private_subnet24 | 317be3d3-5265-43f7-b52b | 10.0.24.0/24 | | | -930e3fd19b8b | | | lb-mgmt-subnet | c4588d49-9151-414b-8832 | 172.24.0.0/16 | | | -37313e3b4c57 | | | external_subnet | bcdb3cc0-8c0b-4d2d-813c | 10.0.0.0/24 | | | -e141bb97aa8f | | | external_ipv6_subnet | bcdb3cc0-8c0b-4d2d-813c | 2620:52:0:13b8::/64 | | | -e141bb97aa8f | | | private_subnet | 317be3d3-5265-43f7-b52b | 10.0.1.0/24 | | | -930e3fd19b8b | | +----------------------+-------------------------+---------------------+",
"openstack subnet create --ip-version 6 --subnet-range 2002:c000:200::64 --no-dhcp --gateway 2002:c000:2fe:: --dns-nameserver 2002:c000:2fe:: --network provider provider-subnet-2002:c000:200::",
"+----------------------+--------------------------------------+ | Field | Value | +----------------------+--------------------------------------+ | allocation_pools | 2002:c000:200::64-2002:c000:200::64 | | cidr | 2002:c000:200::64/128 | | created_at | 2024-09-24T19:30:07Z | | description | | | dns_nameservers | 2002:c000:2fe:: | | dns_publish_fixed_ip | None | | enable_dhcp | False | | gateway_ip | 2002:c000:200::64 | | host_routes | | | id | 49dda67d-814e-457b-b14b-77ef32935c0f | | ip_version | 6 | | ipv6_address_mode | None | | ipv6_ra_mode | None | | name | provider-subnet-2002:c000:200:: | | network_id | bcdb3cc0-8c0b-4d2d-813c-e141bb97aa8f | | prefix_length | None | | project_id | 24089d2fe1a94dd29ca2f665794fbe92 | | revision_number | 0 | | segment_id | None | | service_types | None | | subnetpool_id | None | | tags | | | updated_at | 2024-09-24T19:30:07Z | +----------------------+--------------------------------------+",
"dnf list installed python-openstackclient",
"echo USDOS_CLOUD my_cloud",
"export OS_CLOUD=my_other_cloud",
"openstack project list",
"+----------------------------------+----------+ | ID | Name | +----------------------------------+----------+ | 25837c567ed5458fbb441d39862e1399 | QA | | f59f631a77264a8eb0defc898cb836af | admin | | 4e2e1951e70643b5af7ed52f3ff36539 | demo | | 8561dff8310e4cd8be4b6fd03dc8acf5 | services | +----------------------------------+----------+",
"openstack network list -c Name -c Subnets --max-width=72",
"+-------------+--------------------------------------------------------+ | Name | Subnets | +-------------+--------------------------------------------------------+ | private | 47d34cf0-0dd2-49bd-a985-67311d80c5c4, | | | 82014d36-9e60-43eb-92fc-74674573f4e8, | | | d7535565-113f-4192-baa6-da21f301f141 | | private2 | 7ee56cef-83c0-40d1-b4e7-5287dae1c23c | | public | 6745edd4-d15f-4971-89bf-70307b0ad2f1, | | | cc3f81bb-4d55-4ead-aad4-5362a7ca5b04 | | lb-mgmt-net | 5ca08724-568c-4030-93eb-f2e286570a25 | +-------------+--------------------------------------------------------+",
"openstack subnet create --ip-version 6 --ipv6-address-mode dhcpv6-stateful --project 25837c567ed5458fbb441d39862e1399 --network private2 --subnet-range fdf8:f53b:82e4::53/125 subnet_name",
"+-------------------+--------------------------------------------------------------+ | Field | Value | +-------------------+--------------------------------------------------------------+ | allocation_pools | {\"start\": \"fdf8:f53b:82e4::52\", \"end\": \"fdf8:f53b:82e4::56\"} | | cidr | fdf8:f53b:82e4::53/125 | | dns_nameservers | | | enable_dhcp | True | | gateway_ip | fdf8:f53b:82e4::51 | | host_routes | | | id | cdfc3398-997b-46eb-9db1-ebbd88f7de05 | | ip_version | 6 | | ipv6_address_mode | dhcpv6-stateful | | ipv6_ra_mode | | | name | | | network_id | 6aff6826-4278-4a35-b74d-b0ca0cbba340 | | tenant_id | 25837c567ed5458fbb441d39862e1399 | +-------------------+--------------------------------------------------------------+",
"openstack network list -c Name -c Subnets --max-width=72",
"------------- --------------------------------------------------------+ | Name | Subnets | ------------- --------------------------------------------------------+ | private | 47d34cf0-0dd2-49bd-a985-67311d80c5c4, | | | 82014d36-9e60-43eb-92fc-74674573f4e8, | | | d7535565-113f-4192-baa6-da21f301f141 | | private2 | 7ee56cef-83c0-40d1-b4e7-5287dae1c23c, | | | cdfc3398-997b-46eb-9db1-ebbd88f7de05 | | public | 6745edd4-d15f-4971-89bf-70307b0ad2f1, | | | cc3f81bb-4d55-4ead-aad4-5362a7ca5b04 | | lb-mgmt-net | 5ca08724-568c-4030-93eb-f2e286570a25 | ------------- --------------------------------------------------------+",
"openstack server list -c Name -c Status -c Networks",
"+---------+--------+-----------------------------+ | Name | Status | Networks | +---------+--------+-----------------------------+ | server1 | ACTIVE | private2=fdf8:f53b:82e4::52 | +---------+--------+-----------------------------+",
"dnf list installed python-openstackclient",
"echo USDOS_CLOUD my_cloud",
"export OS_CLOUD=my_other_cloud",
"openstack router create <router-name> --external-gateway <network> --project <name>",
"openstack router list -c Name -c Status -c State -c Project",
"+---------+--------+-------+----------------------------------+ | Name | Status | State | Project | +---------+--------+-------+----------------------------------+ | router1 | ACTIVE | UP | 24089d2fe1a94dd29ca2f665794fbe92 | | router2 | ACTIVE | UP | ecf285621c509223ade3358691bbde59 | +---------+--------+-------+----------------------------------+",
"dnf list installed python-openstackclient",
"echo USDOS_CLOUD my_cloud",
"export OS_CLOUD=my_other_cloud",
"openstack router list",
"+---------+--------+-------+----------------------------------+ | Name | Status | State | Project | +---------+--------+-------+----------------------------------+ | router1 | ACTIVE | UP | 24089d2fe1a94dd29ca2f665794fbe92 | | router2 | ACTIVE | UP | ecf285621c509223ade3358691bbde59 | +---------+--------+-------+----------------------------------+",
"openstack subnet list -c Name -c Network -c Subnet --max-width=72",
"+----------------------+-------------------------+---------------------+ | Name | Network | Subnet | +----------------------+-------------------------+---------------------+ | private_subnet24 | 317be3d3-5265-43f7-b52b | 10.0.24.0/24 | | | -930e3fd19b8b | | | lb-mgmt-subnet | c4588d49-9151-414b-8832 | 172.24.0.0/16 | | | -37313e3b4c57 | | | external_subnet | bcdb3cc0-8c0b-4d2d-813c | 10.0.0.0/24 | | | -e141bb97aa8f | | | external_ipv6_subnet | bcdb3cc0-8c0b-4d2d-813c | 2620:52:0:13b8::/64 | | | -e141bb97aa8f | | | private_subnet | 317be3d3-5265-43f7-b52b | 10.0.1.0/24 | | | -930e3fd19b8b | | +----------------------+-------------------------+---------------------+",
"openstack router add subnet router2 private_subnet",
"openstack router show router2 --max-width=72",
"------------------------- --------------------------------------------+ | Field | Value | ------------------------- --------------------------------------------+ | admin_state_up | UP | | availability_zone_hints | | | availability_zones | | | created_at | 2024-09-09T06:27:48Z | | description | | | external_gateway_info | {\"network_id\": | | | \"bcdb3cc0-8c0b-4d2d-813c-e141bb97aa8f\", | | | \"external_fixed_ips\": [{\"subnet_id\": | | | \"6745edd4-d15f-4971-89bf-70307b0ad2f1\", | | | \"ip_address\": \"10.0.0.167\"}, {\"subnet_id\": | | | \"cc3f81bb-4d55-4ead-aad4-5362a7ca5b04\", | | | \"ip_address\": \"2620:52:0:13b8::1000:85\"}], | | | \"enable_snat\": true} | | flavor_id | None | | id | 9119eade-cf28-42d7-a33d-eb589469bf62 | | interfaces_info | [{\"port_id\": | | | \"5a40b083-27d0-4691-8208-99c507181a33\", | | | \"ip_address\": \"10.0.24.1\", \"subnet_id\": | | | \"47d34cf0-0dd2-49bd-a985-67311d80c5c4\"}, | | | {\"port_id\": | | | \"642e522e-2cbd-47b5-8f8b-88c1b5d5e535\", | | | \"ip_address\": \"10.1.2.1\", \"subnet_id\": | | | \"7ee56cef-83c0-40d1-b4e7-5287dae1c23c\"}, | | | {\"port_id\": | | | \"9f695259-680c-40a8-bbed-9ca84dd77c33\", | | | \"ip_address\": \"10.0.1.1\", \"subnet_id\": | | | \"d7535565-113f-4192-baa6-da21f301f141\"}, | | | {\"port_id\": | | | \"f43a55a7-2e91-4cb7-a6b2-b3b686fc2ebd\", | | | \"ip_address\": \"10.0.2.1\", \"subnet_id\": | | | \"317be3d3-5265-43f7-b52b-930e3fd19b8b\"}] | | name | router2 | | project_id | ecf285621c509223ade3358691bbde59 | | revision_number | 7 | | routes | | | status | ACTIVE | | tags | | | updated_at | 2024-09-09T09:28:40Z | ------------------------- --------------------------------------------+",
"dnf list installed python-openstackclient",
"echo USDOS_CLOUD my_cloud",
"export OS_CLOUD=my_other_cloud",
"openstack router list",
"+---------+--------+-------+----------------------------------+ | Name | Status | State | Project | +---------+--------+-------+----------------------------------+ | router1 | ACTIVE | UP | 24089d2fe1a94dd29ca2f665794fbe92 | | router2 | ACTIVE | UP | ecf285621c509223ade3358691bbde59 | +---------+--------+-------+----------------------------------+",
"openstack subnet list -c Name -c Network -c Subnet --max-width=72",
"+----------------------+-------------------------+---------------------+ | Name | Network | Subnet | +----------------------+-------------------------+---------------------+ | private_subnet24 | 317be3d3-5265-43f7-b52b | 10.0.24.0/24 | | | -930e3fd19b8b | | | lb-mgmt-subnet | c4588d49-9151-414b-8832 | 172.24.0.0/16 | | | -37313e3b4c57 | | | external_subnet | bcdb3cc0-8c0b-4d2d-813c | 10.0.0.0/24 | | | -e141bb97aa8f | | | external_ipv6_subnet | bcdb3cc0-8c0b-4d2d-813c | 2620:52:0:13b8::/64 | | | -e141bb97aa8f | | | private_subnet | 317be3d3-5265-43f7-b52b | 10.0.1.0/24 | | | -930e3fd19b8b | | +----------------------+-------------------------+---------------------+",
"openstack router remove subnet router2 private_subnet",
"openstack router show router2 --max-width=72",
"------------------------- --------------------------------------------+ | Field | Value | ------------------------- --------------------------------------------+ | admin_state_up | UP | | availability_zone_hints | | | availability_zones | | | created_at | 2024-09-09T06:27:48Z | | description | | | external_gateway_info | {\"network_id\": | | | \"bcdb3cc0-8c0b-4d2d-813c-e141bb97aa8f\", | | | \"external_fixed_ips\": [{\"subnet_id\": | | | \"6745edd4-d15f-4971-89bf-70307b0ad2f1\", | | | \"ip_address\": \"10.0.0.167\"}, {\"subnet_id\": | | | \"cc3f81bb-4d55-4ead-aad4-5362a7ca5b04\", | | | \"ip_address\": \"2620:52:0:13b8::1000:85\"}], | | | \"enable_snat\": true} | | flavor_id | None | | id | 9119eade-cf28-42d7-a33d-eb589469bf62 | | interfaces_info | [{\"port_id\": | | | \"5a40b083-27d0-4691-8208-99c507181a33\", | | | \"ip_address\": \"10.0.24.1\", \"subnet_id\": | | | \"47d34cf0-0dd2-49bd-a985-67311d80c5c4\"}, | | | {\"port_id\": | | | \"642e522e-2cbd-47b5-8f8b-88c1b5d5e535\", | | | \"ip_address\": \"10.1.2.1\", \"subnet_id\": | | | \"7ee56cef-83c0-40d1-b4e7-5287dae1c23c\"}, | | | {\"port_id\": | | | \"9f695259-680c-40a8-bbed-9ca84dd77c33\", | | | \"ip_address\": \"10.0.1.1\", \"subnet_id\": | | | \"d7535565-113f-4192-baa6-da21f301f141\"}, | | name | router2 | | project_id | ecf285621c509223ade3358691bbde59 | | revision_number | 7 | | routes | | | status | ACTIVE | | tags | | | updated_at | 2024-09-09T11:14:33Z | ------------------------- --------------------------------------------+",
"dnf list installed python-openstackclient",
"echo USDOS_CLOUD my_cloud",
"export OS_CLOUD=my_other_cloud",
"openstack project purge --project test-project",
"openstack project list",
"+----------------------------------+--------------+ | ID | Name | +----------------------------------+--------------+ | 519e6344f82e4c079c8e2eabb690023b | services | | 80bf5732752a41128e612fe615c886c6 | demo | | 98a2f53c20ce4d50a40dac4a38016c69 | admin | +----------------------------------+--------------+",
"dnf list installed python-openstackclient",
"echo USDOS_CLOUD my_cloud",
"export OS_CLOUD=my_other_cloud",
"openstack router list -c Name -c Status -c State -c Project",
"+---------+--------+-------+----------------------------------+ | Name | Status | State | Project | +---------+--------+-------+----------------------------------+ | router1 | ACTIVE | UP | 24089d2fe1a94dd29ca2f665794fbe92 | | router2 | ACTIVE | UP | ecf285621c509223ade3358691bbde59 | +---------+--------+-------+----------------------------------+",
"openstack router delete router2",
"openstack router list -c Name -c Status -c State -c Project",
"+---------+--------+-------+----------------------------------+ | Name | Status | State | Project | +---------+--------+-------+----------------------------------+ | router1 | ACTIVE | UP | 24089d2fe1a94dd29ca2f665794fbe92 | +---------+--------+-------+----------------------------------+",
"dnf list installed python-openstackclient",
"echo USDOS_CLOUD my_cloud",
"export OS_CLOUD=my_other_cloud",
"openstack subnet list",
"+----------------------+-------------------------+---------------------+ | Name | Network | Subnet | +----------------------+-------------------------+---------------------+ | private_subnet24 | 317be3d3-5265-43f7-b52b | 10.0.24.0/24 | | | -930e3fd19b8b | | | lb-mgmt-subnet | c4588d49-9151-414b-8832 | 172.24.0.0/16 | | | -37313e3b4c57 | | | external_subnet | bcdb3cc0-8c0b-4d2d-813c | 10.0.0.0/24 | | | -e141bb97aa8f | | | private2_subnet | 56e73380-a771-408f-bdc0 | 10.1.2.0/24 | | | -1e97f79677c6 | | | private_subnet2 | 317be3d3-5265-43f7-b52b | 10.0.2.0/24 | | | -930e3fd19b8b | | | external_ipv6_subnet | bcdb3cc0-8c0b-4d2d-813c | 2620:52:0:13b8::/64 | | | -e141bb97aa8f | | | private_subnet | 317be3d3-5265-43f7-b52b | 10.0.1.0/24 | | | -930e3fd19b8b | | +----------------------+-------------------------+---------------------+",
"openstack subnet delete private_subnet24",
"openstack subnet list -c Name -c Network -c Subnet --max-width=72",
"+----------------------+-------------------------+---------------------+ | Name | Network | Subnet | +----------------------+-------------------------+---------------------+ | lb-mgmt-subnet | c4588d49-9151-414b-8832 | 172.24.0.0/16 | | | -37313e3b4c57 | | | external_subnet | bcdb3cc0-8c0b-4d2d-813c | 10.0.0.0/24 | | | -e141bb97aa8f | | | private2_subnet | 56e73380-a771-408f-bdc0 | 10.1.2.0/24 | | | -1e97f79677c6 | | | private_subnet2 | 317be3d3-5265-43f7-b52b | 10.0.2.0/24 | | | -930e3fd19b8b | | | external_ipv6_subnet | bcdb3cc0-8c0b-4d2d-813c | 2620:52:0:13b8::/64 | | | -e141bb97aa8f | | | private_subnet | 317be3d3-5265-43f7-b52b | 10.0.1.0/24 | | | -930e3fd19b8b | | +----------------------+-------------------------+---------------------+",
"dnf list installed python-openstackclient",
"echo USDOS_CLOUD my_cloud",
"export OS_CLOUD=my_other_cloud",
"openstack network list -c Name",
"+-------------+ | Name | +-------------+ | my_network | | private | | public | | lb-mgmt-net | +-------------+",
"openstack network delete my_network",
"openstack network list -c Name",
"+-------------+ | Name | +-------------+ | private | | public | | lb-mgmt-net | +-------------+"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/managing_networking_resources/manage-proj-network_rhoso-mngnet |
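A minimal scripted variant of the delete-and-verify flow from the network deletion procedure above. This is a sketch, not part of the original procedure: it assumes the openstack client is installed, OS_CLOUD is set, and a network named my_network (an example name) with no attached ports.
#!/usr/bin/env bash
# Sketch: delete a network only if no ports are still attached, then verify the deletion.
set -euo pipefail
network="my_network"   # example name; substitute your own

# Abort if any port is still attached to the network.
if [ -n "$(openstack port list --network "$network" -f value -c ID)" ]; then
    echo "Ports are still attached to $network; detach them first." >&2
    exit 1
fi

openstack network delete "$network"

# Confirm that the network no longer appears in the listing.
if openstack network list -c Name -f value | grep -qx "$network"; then
    echo "Deletion of $network did not complete." >&2
    exit 1
fi
echo "$network deleted."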
Chapter 5. Scale [autoscaling/v1] | Chapter 5. Scale [autoscaling/v1] Description Scale represents a scaling request for a resource. Type object 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata . spec object ScaleSpec describes the attributes of a scale subresource. status object ScaleStatus represents the current status of a scale subresource. 5.1.1. .spec Description ScaleSpec describes the attributes of a scale subresource. Type object Property Type Description replicas integer desired number of instances for the scaled object. 5.1.2. .status Description ScaleStatus represents the current status of a scale subresource. Type object Required replicas Property Type Description replicas integer actual number of observed instances of the scaled object. selector string label query over pods that should match the replicas count. This is same as the label selector but in the string format to avoid introspection by clients. The string will be in the same format as the query-param syntax. More info about label selectors: http://kubernetes.io/docs/user-guide/labels#label-selectors 5.2. API endpoints The following API endpoints are available: /apis/apps/v1/namespaces/{namespace}/deployments/{name}/scale GET : read scale of the specified Deployment PATCH : partially update scale of the specified Deployment PUT : replace scale of the specified Deployment /apis/apps/v1/namespaces/{namespace}/replicasets/{name}/scale GET : read scale of the specified ReplicaSet PATCH : partially update scale of the specified ReplicaSet PUT : replace scale of the specified ReplicaSet /apis/apps/v1/namespaces/{namespace}/statefulsets/{name}/scale GET : read scale of the specified StatefulSet PATCH : partially update scale of the specified StatefulSet PUT : replace scale of the specified StatefulSet /api/v1/namespaces/{namespace}/replicationcontrollers/{name}/scale GET : read scale of the specified ReplicationController PATCH : partially update scale of the specified ReplicationController PUT : replace scale of the specified ReplicationController 5.2.1. /apis/apps/v1/namespaces/{namespace}/deployments/{name}/scale Table 5.1. Global path parameters Parameter Type Description name string name of the Scale namespace string object name and auth scope, such as for teams and projects Table 5.2. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read scale of the specified Deployment Table 5.3. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified Deployment Table 5.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 5.5. Body parameters Parameter Type Description body Patch schema Table 5.6. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified Deployment Table 5.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.8. Body parameters Parameter Type Description body Scale schema Table 5.9. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty 5.2.2. /apis/apps/v1/namespaces/{namespace}/replicasets/{name}/scale Table 5.10. Global path parameters Parameter Type Description name string name of the Scale namespace string object name and auth scope, such as for teams and projects Table 5.11. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read scale of the specified ReplicaSet Table 5.12. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified ReplicaSet Table 5.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 5.14. Body parameters Parameter Type Description body Patch schema Table 5.15. 
HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified ReplicaSet Table 5.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.17. Body parameters Parameter Type Description body Scale schema Table 5.18. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty 5.2.3. /apis/apps/v1/namespaces/{namespace}/statefulsets/{name}/scale Table 5.19. Global path parameters Parameter Type Description name string name of the Scale namespace string object name and auth scope, such as for teams and projects Table 5.20. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read scale of the specified StatefulSet Table 5.21. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified StatefulSet Table 5.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 5.23. Body parameters Parameter Type Description body Patch schema Table 5.24. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified StatefulSet Table 5.25. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.26. Body parameters Parameter Type Description body Scale schema Table 5.27. 
HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty 5.2.4. /api/v1/namespaces/{namespace}/replicationcontrollers/{name}/scale Table 5.28. Global path parameters Parameter Type Description name string name of the Scale namespace string object name and auth scope, such as for teams and projects Table 5.29. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read scale of the specified ReplicationController Table 5.30. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified ReplicationController Table 5.31. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 5.32. Body parameters Parameter Type Description body Patch schema Table 5.33. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified ReplicationController Table 5.34. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.35. Body parameters Parameter Type Description body Scale schema Table 5.36. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/autoscale_apis/scale-autoscaling-v1 |
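The scale subresource endpoints listed above can be exercised with standard tooling. The commands below are a minimal sketch, not part of the API reference: the Deployment name my-app, the namespace demo, and the proxy port 8001 are assumptions.
# Read the scale subresource: GET /apis/apps/v1/namespaces/{namespace}/deployments/{name}/scale
kubectl proxy --port=8001 &
curl -s http://127.0.0.1:8001/apis/apps/v1/namespaces/demo/deployments/my-app/scale

# Change spec.replicas through the same subresource with a merge patch (PATCH).
curl -s -X PATCH \
  -H 'Content-Type: application/merge-patch+json' \
  -d '{"spec":{"replicas":3}}' \
  http://127.0.0.1:8001/apis/apps/v1/namespaces/demo/deployments/my-app/scale

# Equivalent convenience command that updates the same subresource.
kubectl -n demo scale deployment my-app --replicas=3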
Chapter 10. Known issues | Chapter 10. Known issues This part describes known issues in Red Hat Enterprise Linux 8.6. 10.1. Installer and image creation Installation fails on IBM Power 10 systems with LPAR and secure boot enabled RHEL installer is not integrated with static key secure boot on IBM Power 10 systems. Consequently, when logical partition (LPAR) is enabled with the secure boot option, the installation fails with the error, Unable to proceed with RHEL-x.x Installation . To work around this problem, install RHEL without enabling secure boot. After booting the system: Copy the signed Kernel into the PReP partition using the dd command. Restart the system and enable secure boot. Once the firmware verifies the bootloader and the kernel, the system boots up successfully. For more information, see https://www.ibm.com/support/pages/node/6528884 (BZ#2025814) Unexpected SELinux policies on systems where Anaconda is running as an application When Anaconda is running as an application on an already installed system (for example to perform another installation to an image file using the -image anaconda option), the system is not prohibited to modify the SELinux types and attributes during installation. As a consequence, certain elements of SELinux policy might change on the system where Anaconda is running. To work around this problem, do not run Anaconda on the production system and execute it in a temporary virtual machine. So that the SELinux policy on a production system is not modified. Running anaconda as part of the system installation process such as installing from boot.iso or dvd.iso is not affected by this issue. ( BZ#2050140 ) The auth and authconfig Kickstart commands require the AppStream repository The authselect-compat package is required by the auth and authconfig Kickstart commands during installation. Without this package, the installation fails if auth or authconfig are used. However, by design, the authselect-compat package is only available in the AppStream repository. To work around this problem, verify that the BaseOS and AppStream repositories are available to the installer or use the authselect Kickstart command during installation. (BZ#1640697) The reboot --kexec and inst.kexec commands do not provide a predictable system state Performing a RHEL installation with the reboot --kexec Kickstart command or the inst.kexec kernel boot parameters do not provide the same predictable system state as a full reboot. As a consequence, switching to the installed system without rebooting can produce unpredictable results. Note that the kexec feature is deprecated and will be removed in a future release of Red Hat Enterprise Linux. (BZ#1697896) The USB CD-ROM drive is not available as an installation source in Anaconda Installation fails when the USB CD-ROM drive is the source for it and the Kickstart ignoredisk --only-use= command is specified. In this case, Anaconda cannot find and use this source disk. To work around this problem, use the harddrive --partition=sdX --dir=/ command to install from USB CD-ROM drive. As a result, the installation does not fail. ( BZ#1914955 ) Network access is not enabled by default in the installation program Several installation features require network access, for example, registration of a system using the Content Delivery Network (CDN), NTP server support, and network installation sources. However, network access is not enabled by default, and as a result, these features cannot be used until network access is enabled. 
To work around this problem, add ip=dhcp to boot options to enable network access when the installation starts. Optionally, passing a Kickstart file or a repository located on the network using boot options also resolves the problem. As a result, the network-based installation features can be used. (BZ#1757877) Hard drive partitioned installations with iso9660 filesystem fails You cannot install RHEL on systems where the hard drive is partitioned with the iso9660 filesystem. This is due to the updated installation code that is set to ignore any hard disk containing a iso9660 file system partition. This happens even when RHEL is installed without using a DVD. To workaround this problem, add the following script in the kickstart file to format the disc before the installation starts. Note: Before performing the workaround, backup the data available on the disk. The wipefs command formats all the existing data from the disk. As a result, installations work as expected without any errors. ( BZ#1929105 ) IBM Power systems with HASH MMU mode fail to boot with memory allocation failures IBM Power Systems with HASH memory allocation unit (MMU) mode support kdump up to a maximum of 192 cores. Consequently, the system fails to boot with memory allocation failures if kdump is enabled on more than 192 cores. This limitation is due to RMA memory allocations during early boot in HASH MMU mode. To work around this problem, use the Radix MMU mode with fadump enabled instead of using kdump . (BZ#2028361) 10.2. Subscription management syspurpose addons have no effect on the subscription-manager attach --auto output. In Red Hat Enterprise Linux 8, four attributes of the syspurpose command-line tool have been added: role , usage , service_level_agreement and addons . Currently, only role , usage and service_level_agreement affect the output of running the subscription-manager attach --auto command. Users who attempt to set values to the addons argument will not observe any effect on the subscriptions that are auto-attached. ( BZ#1687900 ) 10.3. Software management cr_compress_file_with_stat() can cause a memory leak The createrepo_c C library has the API cr_compress_file_with_stat() function. This function is declared with char **dst as a second parameter. Depending on its other parameters, cr_compress_file_with_stat() either uses dst as an input parameter, or uses it to return an allocated string. This unpredictable behavior can cause a memory leak, because it does not inform the user when to free dst contents. To work around this problem, a new API cr_compress_file_with_stat_v2 function has been added, which uses the dst parameter only as an input. It is declared as char *dst . This prevents memory leak. Note that the cr_compress_file_with_stat_v2 function is temporary and will be present only in RHEL 8. Later, cr_compress_file_with_stat() will be fixed instead. (BZ#1973588) YUM transactions reported as successful when a scriptlet fails Since RPM version 4.6, post-install scriptlets are allowed to fail without being fatal to the transaction. This behavior propagates up to YUM as well. This results in scriptlets which might occasionally fail while the overall package transaction reports as successful. There is no workaround available at the moment. Note that this is expected behavior that remains consistent between RPM and YUM. Any issues in scriptlets should be addressed at the package level. 
( BZ#1986657 ) A security DNF upgrade can skip obsoleted packages The patch for BZ#2095764 , released with the RHBA-2022:5816 advisory, introduced the following regression: The DNF upgrade using security filters, such as the --security option, can skip upgrading obsoleted packages. This issue happens specifically when an installed package is obsoleted by a different available package, and an advisory exists for the available package. Consequently, dnf leaves the obsoleted package in the system, and the security upgrade is not fully performed, potentially leaving the system in a vulnerable state. To work around this problem, perform the full upgrade without security filters, or, first, verify that there are no obsoleted packages involved in the upgrade process. ( BZ#2095764 ) 10.4. Shells and command-line tools coreutils might report misleading EPERM error codes GNU Core Utilities ( coreutils ) started using the statx() system call. If a seccomp filter returns an EPERM error code for unknown system calls, coreutils might consequently report misleading EPERM error codes because EPERM can not be distinguished from the actual Operation not permitted error returned by a working statx() syscall. To work around this problem, update the seccomp filter to either permit the statx() syscall, or to return an ENOSYS error code for syscalls it does not know. ( BZ#2030661 ) 10.5. Infrastructure services Postfix TLS fingerprint algorithm in the FIPS mode needs to be changed to SHA-256 By default in RHEL 8, postfix uses MD5 fingerprints with the TLS for backward compatibility. But in the FIPS mode, the MD5 hashing function is not available, which may cause TLS to incorrectly function in the default postfix configuration. To workaround this problem, the hashing function needs to be changed to SHA-256 in the postfix configuration file. For more details, see the related Knowledgebase article Fix postfix TLS in the FIPS mode by switching to SHA-256 instead of MD5 . ( BZ#1711885 ) The brltty package is not multilib compatible It is not possible to have both 32-bit and 64-bit versions of the brltty package installed. You can either install the 32-bit ( brltty.i686 ) or the 64-bit ( brltty.x86_64 ) version of the package. The 64-bit version is recommended. ( BZ#2008197 ) 10.6. Security File permissions of /etc/passwd- are not aligned with the CIS RHEL 8 Benchmark 1.0.0 Because of an issue with the CIS Benchmark, the remediation of the SCAP rule that ensures permissions on the /etc/passwd- backup file configures permissions to 0644 . However, the CIS Red Hat Enterprise Linux 8 Benchmark 1.0.0 requires file permissions 0600 for that file. As a consequence, the file permissions of /etc/passwd- are not aligned with the benchmark after remediation. ( BZ#1858866 ) libselinux-python is available only through its module The libselinux-python package contains only Python 2 bindings for developing SELinux applications and it is used for backward compatibility. For this reason, libselinux-python is no longer available in the default RHEL 8 repositories through the yum install libselinux-python command. To work around this problem, enable both the libselinux-python and python27 modules, and install the libselinux-python package and its dependencies with the following commands: Alternatively, install libselinux-python using its install profile with a single command: As a result, you can install libselinux-python using the respective module. 
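The libselinux-python workaround above refers to commands that are not reproduced in this extract. The following is a minimal sketch of the module-based install; the exact module stream names on a given system may differ.
# Enable the modules, then install the package and its dependencies.
yum -y module enable libselinux-python python27
yum -y install libselinux-python

# Alternative: install through the module's install profile in a single step.
yum -y module install libselinux-python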
(BZ#1666328) udica processes UBI 8 containers only when started with --env container=podman The Red Hat Universal Base Image 8 (UBI 8) containers set the container environment variable to the oci value instead of the podman value. This prevents the udica tool from analyzing a container JavaScript Object Notation (JSON) file. To work around this problem, start a UBI 8 container using a podman command with the --env container=podman parameter. As a result, udica can generate an SELinux policy for a UBI 8 container only when you use the described workaround. ( BZ#1763210 ) SELINUX=disabled in /etc/selinux/config does not work properly Disabling SELinux using the SELINUX=disabled option in the /etc/selinux/config results in a process in which the kernel boots with SELinux enabled and switches to disabled mode later in the boot process. This might cause memory leaks. To work around this problem, disable SELinux by adding the selinux=0 parameter to the kernel command line as described in the Changing SELinux modes at boot time section of the Using SELinux title if your scenario really requires to completely disable SELinux. (JIRA:RHELPLAN-34199) sshd -T provides inaccurate information about Ciphers, MACs and KeX algorithms The output of the sshd -T command does not contain the system-wide crypto policy configuration or other options that could come from an environment file in /etc/sysconfig/sshd and that are applied as arguments on the sshd command. This occurs because the upstream OpenSSH project did not support the Include directive to support Red-Hat-provided cryptographic defaults in RHEL 8. Crypto policies are applied as command-line arguments to the sshd executable in the sshd.service unit during the service's start by using an EnvironmentFile . To work around the problem, use the source command with the environment file and pass the crypto policy as an argument to the sshd command, as in sshd -T USDCRYPTO_POLICY . For additional information, see Ciphers, MACs or KeX algorithms differ from sshd -T to what is provided by current crypto policy level . As a result, the output from sshd -T matches the currently configured crypto policy. (BZ#2044354) OpenSSL in FIPS mode accepts only specific D-H parameters In FIPS mode, TLS clients that use OpenSSL return a bad dh value error and abort TLS connections to servers that use manually generated parameters. This is because OpenSSL, when configured to work in compliance with FIPS 140-2, works only with Diffie-Hellman parameters compliant to NIST SP 800-56A rev3 Appendix D (groups 14, 15, 16, 17, and 18 defined in RFC 3526 and with groups defined in RFC 7919). Also, servers that use OpenSSL ignore all other parameters and instead select known parameters of similar size. To work around this problem, use only the compliant groups. (BZ#1810911) crypto-policies incorrectly allow Camellia ciphers The RHEL 8 system-wide cryptographic policies should disable Camellia ciphers in all policy levels, as stated in the product documentation. However, the Kerberos protocol enables the ciphers by default. To work around the problem, apply the NO-CAMELLIA subpolicy: In the command, replace DEFAULT with the cryptographic level name if you have switched from DEFAULT previously. As a result, Camellia ciphers are correctly disallowed across all applications that use system-wide crypto policies only when you disable them through the workaround. 
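The Camellia workaround above refers to a subpolicy command that is not reproduced in this extract. A minimal sketch using the crypto-policies scoped policy syntax; DEFAULT is an assumption, so substitute the policy level you currently use.
# Apply the NO-CAMELLIA subpolicy on top of the current policy level.
update-crypto-policies --set DEFAULT:NO-CAMELLIA

# Confirm the active policy; restart services or reboot so they pick up the change.
update-crypto-policies --show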
( BZ#1919155 ) Smart-card provisioning process through OpenSC pkcs15-init does not work properly The file_caching option is enabled in the default OpenSC configuration, and the file caching functionality does not handle some commands from the pkcs15-init tool properly. Consequently, the smart-card provisioning process through OpenSC fails. To work around the problem, add the following snippet to the /etc/opensc.conf file: The smart-card provisioning through pkcs15-init only works if you apply the previously described workaround. ( BZ#1947025 ) Connections to servers with SHA-1 signatures do not work with GnuTLS SHA-1 signatures in certificates are rejected by the GnuTLS secure communications library as insecure. Consequently, applications that use GnuTLS as a TLS backend cannot establish a TLS connection to peers that offer such certificates. This behavior is inconsistent with other system cryptographic libraries. To work around this problem, upgrade the server to use certificates signed with SHA-256 or stronger hash, or switch to the LEGACY policy. (BZ#1628553) IKE over TCP connections do not work on custom TCP ports The tcp-remoteport Libreswan configuration option does not work properly. Consequently, an IKE over TCP connection cannot be established when a scenario requires specifying a non-default TCP port. ( BZ#1989050 ) Remediating service-related rules during kickstart installations might fail During a kickstart installation, the OpenSCAP utility sometimes incorrectly shows that a service enable or disable state remediation is not needed. Consequently, OpenSCAP might set the services on the installed system to a non-compliant state. As a workaround, you can scan and remediate the system after the kickstart installation. This will fix the service-related issues. ( BZ#1834716 ) RHV hypervisor may not work correctly when hardening the system during installation When installing Red Hat Virtualization Hypervisor (RHV-H) and applying the Red Hat Enterprise Linux 8 STIG profile, OSCAP Anaconda Add-on may harden the system as RHEL instead of RVH-H and remove essential packages for RHV-H. Consequently, the RHV hypervisor may not work. To work around the problem, install the RHV-H system without applying any profile hardening, and after the installation is complete, apply the profile by using OpenSCAP. As a result, the RHV hypervisor works correctly. ( BZ#2075508 ) Red Hat provides the CVE OVAL reports in compressed format Red Hat provides CVE OVAL feeds in the bzip2-compressed format, and they are no longer available in the XML file format. The location of feeds for RHEL 8 has been updated accordingly to reflect this change. Because referencing compressed content is not standardized, third-party SCAP scanners can have problems with scanning rules that use the feed. ( BZ#2028428 ) Certain sets of interdependent rules in SSG can fail Remediation of SCAP Security Guide (SSG) rules in a benchmark can fail due to undefined ordering of rules and their dependencies. If two or more rules need to be executed in a particular order, for example, when one rule installs a component and another rule configures the same component, they can run in the wrong order and remediation reports an error. To work around this problem, run the remediation twice, and the second run fixes the dependent rules. 
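For the interdependent-rules workaround above, running the remediation twice can be scripted. A minimal sketch, assuming the CIS profile and the RHEL 8 datastream shipped in scap-security-guide; adjust the profile ID and path to your setup.
# Run the remediation twice so that rules depending on earlier fixes succeed on the second pass.
for pass in 1 2; do
    oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_cis \
        --remediate /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml
done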
( BZ#1750755 ) Server with GUI and Workstation installations are not possible with CIS Server profiles The CIS Server Level 1 and Level 2 security profiles are not compatible with the Server with GUI and Workstation software selections. As a consequence, a RHEL 8 installation with the Server with GUI software selection and CIS Server profiles is not possible. An attempted installation using the CIS Server Level 1 or Level 2 profiles and either of these software selections will generate the error message: If you need to align systems with the Server with GUI or Workstation software selections according to CIS benchmarks, use the CIS Workstation Level 1 or Level 2 profiles instead. ( BZ#1843932 ) Kickstart uses org_fedora_oscap instead of com_redhat_oscap in RHEL 8 The Kickstart references the Open Security Content Automation Protocol (OSCAP) Anaconda add-on as org_fedora_oscap instead of com_redhat_oscap , which might cause confusion. This is necessary for backwards compatibility backward compatibility with Red Hat Enterprise Linux 7. (BZ#1665082) SSH timeout rules in STIG profiles configure incorrect options An update of OpenSSH affected the rules in the following Defense Information Systems Agency Security Technical Implementation Guide (DISA STIG) profiles: DISA STIG for RHEL 8 ( xccdf_org.ssgproject.content_profile_stig ) DISA STIG with GUI for RHEL 8 ( xccdf_org.ssgproject.content_profile_stig_gui ) In each of these profiles, the following two rules are affected: When applied to SSH servers, each of these rules configures an option ( ClientAliveCountMax and ClientAliveInterval ) that no longer behaves as previously. As a consequence, OpenSSH no longer disconnects idle SSH users when it reaches the timeout configured by these rules. As a workaround, these rules have been temporarily removed from the DISA STIG for RHEL 8 and DISA STIG with GUI for RHEL 8 profiles until a solution is developed. ( BZ#2038977 ) Certain rsyslog priority strings do not work correctly Support for the GnuTLS priority string for imtcp that allows fine-grained control over encryption is not complete. Consequently, the following priority strings do not work properly in rsyslog : To work around this problem, use only correctly working priority strings: As a result, current configurations must be limited to the strings that work correctly. ( BZ#1679512 ) Negative effects of the default logging setup on performance The default logging environment setup might consume 4 GB of memory or even more and adjustments of rate-limit values are complex when systemd-journald is running with rsyslog . See the Negative effects of the RHEL default logging setup on performance and their mitigations Knowledgebase article for more information. (JIRA:RHELPLAN-10431) Ansible remediations require additional collections With the replacement of Ansible Engine by the ansible-core package, the list of Ansible modules provided with the RHEL subscription is reduced. As a consequence, running remediations that use Ansible content included within the scap-security-guide package requires collections from the rhc-worker-playbook package. 
For an Ansible remediation, perform the following steps: Install the required packages: Navigate to the /usr/share/scap-security-guide/ansible directory: Run the relevant Ansible playbook using environment variables that define the path to the additional Ansible collections: # ANSIBLE_COLLECTIONS_PATH=/usr/share/rhc-worker-playbook/ansible/collections/ansible_collections/ ansible-playbook -c local -i localhost, rhel9-playbook- cis_server_l1 .yml Replace cis_server_l1 with the ID of the profile against which you want to remediate the system. As a result, the Ansible content is processed correctly. Note Support of the collections provided in rhc-worker-playbook is limited to enabling the Ansible content sourced in scap-security-guide . (BZ#2114981) 10.7. Networking The nm-cloud-setup service removes manually-configured secondary IP addresses from interfaces Based on the information received from the cloud environment, the nm-cloud-setup service configures network interfaces. Disable nm-cloud-setup to manually configure interfaces. However, in certain cases, other services on the host can configure interfaces as well. For example, these services could add secondary IP addresses. To avoid that nm-cloud-setup removes secondary IP addresses: Stop and disable the nm-cloud-setup service and timer: Display the available connection profiles: Reactive the affected connection profiles: As a result, the service no longer removes manually-configured secondary IP addresses from interfaces. ( BZ#2132754 ) The primary IP address of an instance changes after starting the nm-cloud-setup service in Alibaba Cloud After launching an instance in the Alibaba Cloud, the nm-cloud-setup service assigns the primary IP address to an instance. However, if you assign multiple secondary IP addresses to an instance and start the nm-cloud-setup service, the former primary IP address gets replaced by one of the already assigned secondary IP addresses. The returned list of metadata verifies the same. To work around the problem, configure secondary IP addresses manually to avoid that the primary IP address changes. As a result, an instance retains both IP addresses and the primary IP address does not change. ( BZ#2079849 ) NetworkManager does not support activating bond and team ports in a specific order NetworkManager activates interfaces alphabetically by interface names. However, if an interface appears later during the boot, for example, because the kernel needs more time to discover it, NetworkManager activates this interface later. NetworkManager does not support setting a priority on bond and team ports. Consequently, the order in which NetworkManager activates ports of these devices is not always predictable. To work around this problem, write a dispatcher script. For an example of such a script, see the corresponding comment in the ticket. ( BZ#1920398 ) Systems with the IPv6_rpfilter option enabled experience low network throughput Systems with the IPv6_rpfilter option enabled in the firewalld.conf file currently experience suboptimal performance and low network throughput in high traffic scenarios, such as 100-Gbps links. To work around the problem, disable the IPv6_rpfilter option. To do so, add the following line in the /etc/firewalld/firewalld.conf file. As a result, the system performs better, but also has reduced security. 
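The IPv6_rpfilter workaround above refers to a configuration line that is not reproduced in this extract. A minimal sketch of the change, assuming the stock key is present in /etc/firewalld/firewalld.conf; note that the entry itself warns this reduces security.
# Disable IPv6 reverse path filtering in firewalld and apply the change.
sed -i 's/^IPv6_rpfilter=.*/IPv6_rpfilter=no/' /etc/firewalld/firewalld.conf
systemctl restart firewalld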
( BZ#1871860 ) RoCE interfaces lose their IP settings due to an unexpected change of the network interface name The RDMA over Converged Ethernet (RoCE) interfaces lose their IP settings due to an unexpected change of the network interface name if both conditions are met: User upgrades from a RHEL 8.6 system or earlier. The RoCE card is enumerated by UID. To work around this problem: Create the /etc/systemd/network/98-rhel87-s390x.link file with the following content: Reboot the system for the changes to take effect. Upgrade to RHEL 8.7 or newer. Note that RoCE interfaces that are enumerated by function ID (FID) and are non-unique will still use unpredictable interface names unless you set the net.naming-scheme=rhel-8.7 kernel parameter. In this case, the RoCE interfaces will switch to predictable names with the "ens" prefix. ( BZ#2169382 ) 10.8. Kernel Using net_prio or net_cls controllers in v1 mode deactivates some controllers of the cgroup-v2 hierarchy In cgroup-v2 environments, using either net_prio or net_cls controllers in v1 mode disables the hierarchical tracking of socket data. As a result, the cgroup-v2 hierarchy for socket data tracking controllers is not active, and the dmesg command reports the following message: (BZ#2046396) Anaconda in some cases fails after entering the passphrase for encrypted devices If kdump is disabled when preparing an installation and the user selects encrypted disk partitioning, the Anaconda installer fails with a traceback after entering the passphrase for the encrypted device. To work around this problem, do one of the following: Create the encrypted disk partitioning before disabling kdump . Keep kdump enabled during the installation and disable it after the installation process is complete. ( BZ#2086100 ) Reloading an identical crash extension may cause segmentation faults When you load a copy of an already loaded crash extension file, it might trigger a segmentation fault. Currently, the crash utility detects if an original file has been loaded. Consequently, due to two identical files co-existing in the crash utility, a namespace collision occurs, which triggers the crash utility to cause a segmentation fault. You can work around the problem by loading the crash extension file only once. As a result, segmentation faults no longer occur in the described scenario. ( BZ#1906482 ) vmcore capture fails after memory hot-plug or unplug operation After a memory hot-plug or hot-unplug operation, the corresponding event arrives only after the device tree, which contains the memory layout information, has been updated. As a result, the makedumpfile utility tries to access a non-existent physical address. The problem appears if all of the following conditions are met: A little-endian variant of IBM Power System runs RHEL 8. The kdump or fadump service is enabled on the system. Consequently, the capture kernel fails to save vmcore if a kernel crash is triggered after the memory hot-plug or hot-unplug operation. To work around this problem, restart the kdump service after hot-plug or hot-unplug: As a result, vmcore is successfully saved in the described scenario. (BZ#1793389) Debug kernel fails to boot in crash capture environment on RHEL 8 Due to the memory-intensive nature of the debug kernel, a problem occurs when the debug kernel is in use and a kernel panic is triggered. As a consequence, the debug kernel is not able to boot as the capture kernel and a stack trace is generated instead. To work around this problem, increase the crash kernel memory as required.
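A hedged sketch of increasing the crash kernel memory with grubby follows; the 512M value is only an illustrative assumption and must be sized for your system:
# Assumption: 512M is an example value; adjust crashkernel= to your system's needs
grubby --update-kernel=ALL --args="crashkernel=512M"
# Reboot so the new reservation takes effect
reboot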
As a result, the debug kernel boots successfully in the crash capture environment. (BZ#1659609) Allocating crash kernel memory fails at boot time On some Ampere Altra systems, allocating the crash kernel memory during boot fails when the 32-bit region is disabled in BIOS settings. Consequently, the kdump service fails to start. This is caused by memory fragmentation in the region below 4 GB with no fragment being large enough to contain the crash kernel memory. To work around this problem, enable the 32-bit memory region in BIOS as follows: Open the BIOS settings on your system. Open the Chipset menu. Under Memory Configuration , enable the Slave 32-bit option. As a result, crash kernel memory allocation within the 32-bit region succeeds and the kdump service works as expected. (BZ#1940674) The kernel ACPI driver reports it has no access to a PCIe ECAM memory region The Advanced Configuration and Power Interface (ACPI) table provided by firmware does not define a memory region on the PCI bus in the Current Resource Settings (_CRS) method for the PCI bus device. Consequently, the following warning message occurs during the system boot: However, the kernel is still able to access the 0x30000000-0x31ffffff memory region, and can assign that memory region to the PCI Enhanced Configuration Access Mechanism (ECAM) properly. You can verify that PCI ECAM works correctly by accessing the PCIe configuration space over the 256 byte offset with the following output: As a result, you can ignore the warning message. For more information about the problem, see the "Firmware Bug: ECAM area mem 0x30000000-0x31ffffff not reserved in ACPI namespace" appears during system boot solution. (BZ#1868526) The tuned-adm profile powersave command causes the system to become unresponsive Executing the tuned-adm profile powersave command leads to an unresponsive state of the Penguin Valkyrie 2000 2-socket systems with the older Thunderx (CN88xx) processors. Consequently, reboot the system to resume working. To work around this problem, avoid using the powersave profile if your system matches the mentioned specifications. (BZ#1609288) The HP NMI watchdog does not always generate a crash dump In certain cases, the hpwdt driver for the HP NMI watchdog is not able to claim a non-maskable interrupt (NMI) generated by the HPE watchdog timer because the NMI was instead consumed by the perfmon driver. The missing NMI is initiated by one of two conditions: The Generate NMI button on the Integrated Lights-Out (iLO) server management software. This button is triggered by a user. The hpwdt watchdog. The expiration by default sends an NMI to the server. Both sequences typically occur when the system is unresponsive. Under normal circumstances, the NMI handler for both these situations calls the kernel panic() function and if configured, the kdump service generates a vmcore file. Because of the missing NMI, however, kernel panic() is not called and vmcore is not collected. In the first case (1.), if the system was unresponsive, it remains so. To work around this scenario, use the virtual Power button to reset or power cycle the server. In the second case (2.), the missing NMI is followed 9 seconds later by a reset from the Automated System Recovery (ASR). The HPE Gen9 Server line experiences this problem in single-digit percentages. The Gen10 at an even smaller frequency. 
(BZ#1602962) Using irqpoll causes vmcore generation failure Due to an existing problem with the nvme driver on 64-bit ARM systems that run on the Amazon Web Services Graviton 1 processor, vmcore generation fails when you provide the irqpoll kernel command line parameter to the first kernel. Consequently, no vmcore file is dumped in the /var/crash/ directory upon a kernel crash. To work around this problem: Append irqpoll to the KDUMP_COMMANDLINE_REMOVE variable in the /etc/sysconfig/kdump file. Remove irqpoll from the KDUMP_COMMANDLINE_APPEND variable in the /etc/sysconfig/kdump file. Restart the kdump service: As a result, the first kernel boots correctly and the vmcore file is expected to be captured upon the kernel crash. Note that the Amazon Web Services Graviton 2 and Amazon Web Services Graviton 3 processors do not require you to manually remove the irqpoll parameter in the /etc/sysconfig/kdump file. The kdump service can use a significant amount of crash kernel memory to dump the vmcore file. Ensure that the capture kernel has sufficient memory available for the kdump service. For related information on this known issue, see the article The irqpoll kernel command line parameter might cause vmcore generation failure. (BZ#1654962) Connections fail when attaching a virtual function to a virtual machine Pensando network cards that use the ionic device driver silently accept VLAN tag configuration requests and attempt configuring network connections while attaching network virtual functions ( VF ) to a virtual machine ( VM ). Such network connections fail as this feature is not yet supported by the card's firmware. (BZ#1930576) The OPEN MPI library may trigger run-time failures with default PML In the OPEN Message Passing Interface (OPEN MPI) implementation 4.0.x series, Unified Communication X (UCX) is the default point-to-point communicator (PML). Later versions of the OPEN MPI 4.0.x series deprecated the openib Byte Transfer Layer (BTL). However, when OPEN MPI runs over a homogeneous cluster (same hardware and software configuration), UCX still uses the openib BTL for MPI one-sided operations. As a consequence, this may trigger execution errors. To work around this problem: Run the mpirun command using the following parameters: where the -mca btl openib parameter disables the openib BTL, the -mca pml ucx parameter configures OPEN MPI to use the ucx PML, and the -x UCX_NET_DEVICES= parameter restricts UCX to the specified devices. When OPEN MPI runs over a heterogeneous cluster (different hardware and software configuration), it uses UCX as the default PML. As a consequence, this may cause the OPEN MPI jobs to run with erratic performance, unresponsive behavior, or crash failures. To work around this problem, set the UCX priority by running the mpirun command with the following parameters: As a result, the OPEN MPI library is able to choose an alternative available transport layer over UCX. (BZ#1866402) Solarflare NICs fail to create the maximum number of virtual functions (VFs) The Solarflare NICs fail to create the maximum number of VFs due to insufficient resources. You can check the maximum number of VFs that a PCIe device can create in the /sys/bus/pci/devices/PCI_ID/sriov_totalvfs file. To work around this problem, you can reduce either the number of VFs or the VF MSI interrupt value, either from the Solarflare Boot Manager on startup or by using the Solarflare sfboot utility. The default VF MSI interrupt value is 8.
To adjust the VF MSI interrupt value using sfboot : Note Adjusting VF MSI interrupt value affects the VF performance. For more information about parameters to be adjusted accordingly, see the Solarflare Server Adapter user guide . (BZ#1971506) Memory allocation for kdump fails on the 64-bit ARM architectures On certain 64-bit ARM based systems, the firmware uses the non-contiguous memory allocation method, which reserves memory randomly at different scattered locations. Consequently, due to the unavailability of consecutive blocks of memory, the crash kernel cannot reserve memory space for the kdump mechanism. To work around this problem, install the kernel version provided by RHEL 8.8 and later. The latest version of RHEL supports the fallback dump capture mechanism that helps to find a suitable memory region in the described scenario. ( BZ#2214235 ) Hardware certification of the real-time kernel on systems with large core-counts might require passing the skew-tick=1 boot parameter to avoid lock contentions Large or moderate sized systems with numerous sockets and large core-counts can experience latency spikes due to lock contentions on xtime_lock , which is used in the timekeeping system. As a consequence, latency spikes and delays in hardware certifications might occur on multiprocessing systems. As a workaround, you can offset the timer tick per CPU to start at a different time by adding the skew_tick=1 boot parameter. To avoid lock conflicts, enable skew_tick=1 : Enable the skew_tick=1 parameter with grubby . Reboot for changes to take effect. Verify the new settings by running the cat /proc/cmdline command. Note that enabling skew_tick=1 causes a significant increase in power consumption and, therefore, it must be enabled only if you are running latency sensitive real-time workloads. (BZ#2214508) 10.9. File systems and storage Limitations of LVM writecache The writecache LVM caching method has the following limitations, which are not present in the cache method: You cannot name a writecache logical volume when using pvmove commands. You cannot use logical volumes with writecache in combination with thin pools or VDO. The following limitation also applies to the cache method: You cannot resize a logical volume while cache or writecache is attached to it. (JIRA:RHELPLAN-27987, BZ#1798631 , BZ#1808012) XFS quota warnings are triggered too often Using the quota timer results in quota warnings triggering too often, which causes soft quotas to be enforced faster than they should. To work around this problem, do not use soft quotas, which will prevent triggering warnings. As a result, the amount of warning messages will not enforce soft quota limit anymore, respecting the configured timeout. (BZ#2059262) LVM mirror devices that store a LUKS volume sometimes become unresponsive Mirrored LVM devices with a segment type of mirror that store a LUKS volume might become unresponsive under certain conditions. The unresponsive devices reject all I/O operations. To work around the issue, Red Hat recommends that you use LVM RAID 1 devices with a segment type of raid1 instead of mirror if you need to stack LUKS volumes on top of resilient software-defined storage. The raid1 segment type is the default RAID configuration type and replaces mirror as the recommended solution. To convert mirror devices to raid1 , see Converting a mirrored LVM device to a RAID1 device . (BZ#1730502) The /boot file system cannot be placed on LVM You cannot place the /boot file system on an LVM logical volume. 
This limitation exists for the following reasons: On EFI systems, the EFI System Partition conventionally serves as the /boot file system. The uEFI standard requires a specific GPT partition type and a specific file system type for this partition. RHEL 8 uses the Boot Loader Specification (BLS) for system boot entries. This specification requires that the /boot file system is readable by the platform firmware. On EFI systems, the platform firmware can read only the /boot configuration defined by the uEFI standard. The support for LVM logical volumes in the GRUB 2 boot loader is incomplete. Red Hat does not plan to improve the support because the number of use cases for the feature is decreasing due to standards such as uEFI and BLS. Red Hat does not plan to support /boot on LVM. Instead, Red Hat provides tools for managing system snapshots and rollback that do not need the /boot file system to be placed on an LVM logical volume. (BZ#1496229) LVM no longer allows creating volume groups with mixed block sizes LVM utilities such as vgcreate or vgextend no longer allow you to create volume groups (VGs) where the physical volumes (PVs) have different logical block sizes. LVM has adopted this change because file systems fail to mount if you extend the underlying logical volume (LV) with a PV of a different block size. To re-enable creating VGs with mixed block sizes, set the allow_mixed_block_sizes=1 option in the lvm.conf file. ( BZ#1768536 ) Using Device mapper multipath with the NVMe/TCP driver causes system instability DM multipath is not supported with the NVMe/TCP driver. Using it causes sleeping functions in the kernel to be called in an atomic context, which then results in system instability. To workaround the problem, enable native NVMe multipath. Do not use DM multipath tools. For RHEL 8, add the option nvme_core.multipath=Y to the kernel command line. (BZ#2022359) The blk-availability systemd service deactivates complex device stacks In systemd , the default block deactivation code does not always handle complex stacks of virtual block devices correctly. In some configurations, virtual devices might not be removed during the shutdown, which causes error messages to be logged. To work around this problem, deactivate complex block device stacks by executing the following command: As a result, complex virtual device stacks are correctly deactivated during shutdown and do not produce error messages. (BZ#2011699) 10.10. Dynamic programming languages, web and database servers getpwnam() might fail when called by a 32-bit application When a user of NIS uses a 32-bit application that calls the getpwnam() function, the call fails if the nss_nis.i686 package is missing. To work around this problem, manually install the missing package by using the yum install nss_nis.i686 command. ( BZ#1803161 ) MariaDB 10.5 does not warn about dropping a non-existent table when the OQGraph plug-in is enabled When the OQGraph storage engine plug-in is loaded to the MariaDB 10.5 server, MariaDB does not warn about dropping a non-existent table. In particular, when the user attempts to drop a non-existent table using the DROP TABLE or DROP TABLE IF EXISTS SQL commands, MariaDB neither returns an error message nor logs a warning. Note that the OQGraph plug-in is provided by the mariadb-oqgraph-engine package, which is not installed by default. ( BZ#1944653 ) PAM plug-in version 1.0 does not work in MariaDB MariaDB 10.3 provides the Pluggable Authentication Modules (PAM) plug-in version 1.0. 
MariaDB 10.5 provides the plug-in versions 1.0 and 2.0, version 2.0 is the default. The MariaDB PAM plug-in version 1.0 does not work in RHEL 8. To work around this problem, use the PAM plug-in version 2.0 provided by the mariadb:10.5 module stream. ( BZ#1942330 ) Symbol conflicts between OpenLDAP libraries might cause crashes in httpd When both the libldap and libldap_r libraries provided by OpenLDAP are loaded and used within a single process, symbol conflicts between these libraries might occur. Consequently, Apache httpd child processes using the PHP ldap extension might terminate unexpectedly if the mod_security or mod_auth_openidc modules are also loaded by the httpd configuration. Since the RHEL 8.3 update to the Apache Portable Runtime (APR) library, you can work around the problem by setting the APR_DEEPBIND environment variable, which enables the use of the RTLD_DEEPBIND dynamic linker option when loading httpd modules. When the APR_DEEPBIND environment variable is enabled, crashes no longer occur in httpd configurations that load conflicting libraries. (BZ#1819607) 10.11. Identity Management Windows Server 2008 R2 and earlier no longer supported In RHEL 8.4 and later, Identity Management (IdM) does not support establishing trust to Active Directory with Active Directory domain controllers running Windows Server 2008 R2 or earlier versions. RHEL IdM now requires SMB encryption when establishing the trust relationship, which is only available with Windows Server 2012 or later. ( BZ#1971061 ) Using the cert-fix utility with the --agent-uid pkidbuser option breaks Certificate System Using the cert-fix utility with the --agent-uid pkidbuser option corrupts the LDAP configuration of Certificate System. As a consequence, Certificate System might become unstable and manual steps are required to recover the system. ( BZ#1729215 ) The /var/log/lastlog sparse file on IdM hosts can cause performance problems During the IdM installation, a range of 200,000 UIDs from a total of 10,000 possible ranges is randomly selected and assigned. Selecting a random range in this way significantly reduces the probability of conflicting IDs in case you decide to merge two separate IdM domains in the future. However, having high UIDs can create problems with the /var/log/lastlog file. For example, if a user with the UID of 1280000008 logs in to an IdM client, the local /var/log/lastlog file size increases to almost 400 GB. Although the actual file is sparse and does not use all that space, certain applications are not designed to identify sparse files by default and may require a specific option to handle them. For example, if the setup is complex and a backup and copy application does not handle sparse files correctly, the file is copied as if its size was 400 GB. This behavior can cause performance problems. To work around this problem: In case of a standard package, refer to its documentation to identify the option that handles sparse files. In case of a custom application, ensure that it is able to manage sparse files such as /var/log/lastlog correctly. (JIRA:RHELPLAN-59111) FIPS mode does not support using a shared secret to establish a cross-forest trust Establishing a cross-forest trust using a shared secret fails in FIPS mode because NTLMSSP authentication is not FIPS-compliant. To work around this problem, authenticate with an Active Directory (AD) administrative account when establishing a trust between an IdM domain with FIPS mode enabled and an AD domain. 
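For illustration, a hedged sketch of establishing the cross-forest trust with an AD administrative account rather than a shared secret; the ad.example.com domain and the Administrator account are hypothetical placeholders:
# Hypothetical AD domain and administrator account; the command prompts for the AD password
ipa trust-add --type=ad ad.example.com --admin Administrator --password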
( BZ#1924707 ) FreeRADIUS server fails to run in FIPS mode By default, in FIPS mode, OpenSSL disables the use of the MD5 digest algorithm. As the RADIUS protocol requires MD5 to encrypt a secret between the RADIUS client and the RADIUS server, this causes the FreeRADIUS server to fail in FIPS mode. To work around this problem, follow these steps: Procedure Create the environment variable, RADIUS_MD5_FIPS_OVERRIDE for the radiusd service: To apply the change, reload the systemd configuration and start the radiusd service: To run FreeRADIUS in debug mode: Note that though FreeRADIUS can run in FIPS mode, this does not mean that it is FIPS compliant as it uses weak ciphers and functions when in FIPS mode. For more information on configuring FreeRADIUS authentication in FIPS mode, see How to configure FreeRADIUS authentication in FIPS mode . ( BZ#1958979 ) Actions required when running Samba as a print server and updating from RHEL 8.4 and earlier With this update, the samba package no longer creates the /var/spool/samba/ directory. If you use Samba as a print server and use /var/spool/samba/ in the [printers] share to spool print jobs, SELinux prevents Samba users from creating files in this directory. Consequently, print jobs fail and the auditd service logs a denied message in /var/log/audit/audit.log . To avoid this problem after updating your system from 8.4 and earlier: Search the [printers] share in the /etc/samba/smb.conf file. If the share definition contains path = /var/spool/samba/ , update the setting and set the path parameter to /var/tmp/ . Restart the smbd service: If you newly installed Samba on RHEL 8.5 or later, no action is required. The default /etc/samba/smb.conf file provided by the samba-common package in this case already uses the /var/tmp/ directory to spool print jobs. (BZ#2009213) Downgrading authselect after the rebase to version 1.2.2 breaks system authentication The authselect package has been rebased to the latest upstream version 1.2.2 . Downgrading authselect is not supported and breaks system authentication for all users, including root . If you downgraded the authselect package to 1.2.1 or earlier, perform the following steps to work around this problem: At the GRUB boot screen, select Red Hat Enterprise Linux with the version of the kernel that you want to boot and press e to edit the entry. Type single as a separate word at the end of the line that starts with linux and press Ctrl+X to start the boot process. Upon booting in single-user mode, enter the root password. Restore authselect configuration using the following command: ( BZ#1892761 ) The default keyword for enabled ciphers in the NSS does not work in conjunction with other ciphers In Directory Server you can use the default keyword to refer to the default ciphers enabled in the network security services (NSS). However, if you want to enable the default ciphers and additional ones using the command line or web console, Directory Server fails to resolve the default keyword. As a consequence, the server enables only the additionally specified ciphers and logs the following error: As a workaround, specify all ciphers that are enabled by default in NSS including the ones you want to additionally enable. ( BZ#1817505 ) Potential risk when using the default value for ldap_id_use_start_tls option When using ldap:// without TLS for identity lookups, it can pose a risk for an attack vector. 
Particularly a man-in-the-middle (MITM) attack which could allow an attacker to impersonate a user by altering, for example, the UID or GID of an object returned in an LDAP search. Currently, the SSSD configuration option to enforce TLS, ldap_id_use_start_tls , defaults to false . Ensure that your setup operates in a trusted environment and decide if it is safe to use unencrypted communication for id_provider = ldap . Note id_provider = ad and id_provider = ipa are not affected as they use encrypted connections protected by SASL and GSSAPI. If it is not safe to use unencrypted communication, enforce TLS by setting the ldap_id_use_start_tls option to true in the /etc/sssd/sssd.conf file. The default behavior is planned to be changed in a future release of RHEL. (JIRA:RHELPLAN-155168) SSSD retrieves incomplete list of members if the group size exceeds 1500 members During the integration of SSSD with Active Directory, SSSD retrieves incomplete group member lists when the group size exceeds 1500 members. This issue occurs because Active Directory's MaxValRange policy, which restricts the number of members retrievable in a single query, is set to 1500 by default. To work around this problem, change the MaxValRange setting in Active Directory to accommodate larger group sizes. (JIRA:RHELDOCS-19603) 10.12. Desktop Disabling flatpak repositories from Software Repositories is not possible Currently, it is not possible to disable or remove flatpak repositories in the Software Repositories tool in the GNOME Software utility. ( BZ#1668760 ) Generation 2 RHEL 8 virtual machines sometimes fail to boot on Hyper-V Server 2016 hosts When using RHEL 8 as the guest operating system on a virtual machine (VM) running on a Microsoft Hyper-V Server 2016 host, the VM in some cases fails to boot and returns to the GRUB boot menu. In addition, the following error is logged in the Hyper-V event log: This error occurs due to a UEFI firmware bug on the Hyper-V host. To work around this problem, use Hyper-V Server 2019 or later as the host. (BZ#1583445) Drag-and-drop does not work between desktop and applications Due to a bug in the gnome-shell-extensions package, the drag-and-drop functionality does not currently work between desktop and applications. Support for this feature will be added back in a future release. ( BZ#1717947 ) 10.13. Graphics infrastructures radeon fails to reset hardware correctly The radeon kernel driver currently does not reset hardware in the kexec context correctly. Instead, radeon falls over, which causes the rest of the kdump service to fail. To work around this problem, disable radeon in kdump by adding the following line to the /etc/kdump.conf file: Restart the machine and kdump . After starting kdump , the force_rebuild 1 line may be removed from the configuration file. Note that in this scenario, no graphics will be available during kdump , but kdump will work successfully. (BZ#1694705) Multiple HDR displays on a single MST topology may not power on On systems using NVIDIA Turing GPUs with the nouveau driver, using a DisplayPort hub (such as a laptop dock) with multiple monitors which support HDR plugged into it may result in failure to turn on. This is due to the system erroneously thinking there is not enough bandwidth on the hub to support all of the displays. 
(BZ#1812577) GUI in ESXi might crash due to low video memory The graphical user interface (GUI) on RHEL virtual machines (VMs) in the VMware ESXi 7.0.1 hypervisor with vCenter Server 7.0.1 requires a certain amount of video memory. If you connect multiple consoles or high-resolution monitors to the VM, the GUI requires at least 16 MB of video memory. If you start the GUI with less video memory, the GUI might terminate unexpectedly. To work around the problem, configure the hypervisor to assign at least 16 MB of video memory to the VM. As a result, the GUI on the VM no longer crashes. If you encounter this issue, Red Hat recommends that you report it to VMware. See also the following VMware article: VMs with high resolution VM console may experience a crash on ESXi 7.0.1 (83194) . (BZ#1910358) VNC Viewer displays wrong colors with the 16-bit color depth on IBM Z The VNC Viewer application displays wrong colors when you connect to a VNC session on an IBM Z server with the 16-bit color depth. To work around the problem, set the 24-bit color depth on the VNC server. With the Xvnc server, replace the -depth 16 option with -depth 24 in the Xvnc configuration. As a result, VNC clients display the correct colors but use more network bandwidth with the server. ( BZ#1886147 ) Unable to run graphical applications using sudo command When trying to run graphical applications as a user with elevated privileges, the application fails to open with an error message. The failure happens because Xwayland is restricted by the Xauthority file to use regular user credentials for authentication. To work around this problem, use the sudo -E command to run graphical applications as a root user. ( BZ#1673073 ) Hardware acceleration is not supported on ARM Built-in graphics drivers do not support hardware acceleration or the Vulkan API on the 64-bit ARM architecture. To enable hardware acceleration or Vulkan on ARM, install the proprietary Nvidia driver. (JIRA:RHELPLAN-57914) 10.14. The web console Removing USB host devices using the web console does not work as expected When you attach a USB device to a virtual machine (VM), the device number and bus number of the USB device might change after they are passed to the VM. As a consequence, using the web console to remove such devices fails due to the incorrect correlation of the device and bus numbers. To workaround this problem, remove the <hostdev> part of the USB device, from the VM's XML configuration. (JIRA:RHELPLAN-109067) Attaching multiple host devices using the web console does not work When you select multiple devices to attach to a virtual machine (VM) using the web console, only a single device is attached and the rest are ignored. To work around this problem, attach only one device at a time. (JIRA:RHELPLAN-115603) 10.15. Red Hat Enterprise Linux system roles Unable to manage localhost by using the localhost hostname in the playbook or inventory With the inclusion of the ansible-core 2.12 package in RHEL, if you are running Ansible on the same host you manage your nodes, you cannot do it by using the localhost hostname in your playbook or inventory. This happens because ansible-core 2.12 uses the python38 module, and many of the libraries are missing, for example, blivet for the storage role, gobject for the network role. 
To workaround this problem, if you are already using the localhost hostname in your playbook or inventory, you can add a connection, by using ansible_connection=local , or by creating an inventory file that lists localhost with the ansible_connection=local option. With that, you are able to manage resources on localhost . For more details, see the article RHEL system roles playbooks fail when run on localhost . ( BZ#2041997 ) 10.16. Virtualization Network traffic performance in virtual machines might be reduced In some cases, RHEL 8.6 guest virtual machines (VMs) have somewhat decreased performance when handling high levels of network traffic. (BZ#2069047) Using a large number of queues might cause Windows virtual machines to fail Windows virtual machines (VMs) might fail when the virtual Trusted Platform Module (vTPM) device is enabled and the multi-queue virtio-net feature is configured to use more than 250 queues. This problem is caused by a limitation in the vTPM device. The vTPM device has a hardcoded limit on the maximum number of opened file descriptors. Since multiple file descriptors are opened for every new queue, the internal vTPM limit can be exceeded, causing the VM to fail. To work around this problem, choose one of the following two options: Keep the vTPM device enabled, but use less than 250 queues. Disable the vTPM device to use more than 250 queues. ( BZ#2020133 ) Live post-copy migration of VMs with failover VFs does not work Currently, attempting to post-copy migrate a running virtual machine (VM) fails if the VM uses a device with the virtual function (VF) failover capability enabled. To work around the problem, use the standard migration type, rather than post-copy migration. (BZ#2054656) Live migrating VMs to a RHEL 8.6 Intel host from an earlier minor version of RHEL 8 does not work Because the Intel Transactional Synchronization Extensions (TSX) feature has become deprecated, RHEL 8.6 hosts on Intel hardware no longer use the hle and rtm CPU flags. As a consequence, live migrating a virtual machine (VM) to a RHEL 8.6 host fails if the source host uses RHEL 8.5 or an earlier minor version of RHEL 8. For more information on TSX deprecation, see the Red Hat KnowledgeBase . (BZ#2134184) The Milan VM CPU type is sometimes not available on AMD Milan systems On certain AMD Milan systems, the Enhanced REP MOVSB ( erms ) and Fast Short REP MOVSB ( fsrm ) feature flags are disabled in the BIOS by default. Consequently, the 'Milan' CPU type might not be available on these systems. In addition, VM live migration between Milan hosts with different feature flag settings might fail. To work around these problems, manually turn on erms and fsrm in the BIOS of your host. (BZ#2077770) Attaching LUN devices to virtual machines using virtio-blk does not work The q35 machine type does not support transitional virtio 1.0 devices, and RHEL 8 therefore lacks support for features that were deprecated in virtio 1.0. In particular, it is not possible on a RHEL 8 host to send SCSI commands from virtio-blk devices. As a consequence, attaching a physical disk as a LUN device to a virtual machine fails when using the virtio-blk controller. Note that physical disks can still be passed through to the guest operating system, but they should be configured with the device='disk' option rather than device='lun' . 
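As an illustration, a hedged sketch of a domain XML fragment that passes a physical disk through with device='disk' instead of device='lun'; the /dev/sdb source path and the vdb target are hypothetical:
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/sdb'/>
  <target dev='vdb' bus='virtio'/>
</disk>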
(BZ#1777138) Virtual machines with iommu_platform=on fail to start on IBM POWER RHEL 8 currently does not support the iommu_platform=on parameter for virtual machines (VMs) on IBM POWER system. As a consequence, starting a VM with this parameter on IBM POWER hardware results in the VM becoming unresponsive during the boot process. ( BZ#1910848 ) IBM POWER hosts may crash when using the ibmvfc driver When running RHEL 8 on a PowerVM logical partition (LPAR), a variety of errors may currently occur due to problems with the ibmvfc driver. As a consequence, the host's kernel may panic under certain circumstances, such as: Using the Live Partition Mobility (LPM) feature Resetting a host adapter Using SCSI error handling (SCSI EH) functions (BZ#1961722) Using perf kvm record on IBM POWER Systems can cause the VM to crash When using a RHEL 8 host on the little-endian variant of IBM POWER hardware, using the perf kvm record command to collect trace event samples for a KVM virtual machine (VM) in some cases results in the VM becoming unresponsive. This situation occurs when: The perf utility is used by an unprivileged user, and the -p option is used to identify the VM - for example perf kvm record -e trace_cycles -p 12345 . The VM was started using the virsh shell. To work around this problem, use the perf kvm utility with the -i option to monitor VMs that were created using the virsh shell. For example: Note that when using the -i option, child tasks do not inherit counters, and threads will therefore not be monitored. (BZ#1924016) Windows Server 2016 virtual machines with Hyper-V enabled fail to boot when using certain CPU models Currently, it is not possible to boot a virtual machine (VM) that uses Windows Server 2016 as the guest operating system, has the Hyper-V role enabled, and uses one of the following CPU models: EPYC-IBPB EPYC To work around this problem, use the EPYC-v3 CPU model, or manually enable the xsaves CPU flag for the VM. (BZ#1942888) Migrating a POWER9 guest from a RHEL 7-ALT host to RHEL 8 fails Currently, migrating a POWER9 virtual machine from a RHEL 7-ALT host system to RHEL 8 becomes unresponsive with a Migration status: active status. To work around this problem, disable Transparent Huge Pages (THP) on the RHEL 7-ALT host, which enables the migration to complete successfully. (BZ#1741436) Using virt-customize sometimes causes guestfs-firstboot to fail After modifying a virtual machine (VM) disk image using the virt-customize utility, the guestfs-firstboot service in some cases fails due to incorrect SELinux permissions. This causes a variety of problems during VM startup, such as failing user creation or system registration. To avoid this problem, add the --selinux-relabel option to the virt-customize command. ( BZ#1554735 ) Deleting a forward interface from a macvtap virtual network resets all connection counts of this network Currently, deleting a forward interface from a macvtap virtual network with multiple forward interfaces also resets the connection status of the other forward interfaces of the network. As a consequence, the connection information in the live network XML is incorrect. Note, however, that this does not affect the functionality of the virtual network. To work around the issue, restart the libvirtd service on your host. 
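For example, on a host using systemd:
# Restart libvirtd to reset the recorded connection state of the virtual network
systemctl restart libvirtd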
( BZ#1332758 ) Virtual machines with SLOF fail to boot in netcat interfaces When using a netcat ( nc ) interface to access the console of a virtual machine (VM) that is currently waiting at the Slimline Open Firmware (SLOF) prompt, the user input is ignored and the VM stays unresponsive. To work around this problem, use the nc -C option when connecting to the VM, or use a telnet interface instead. (BZ#1974622) Attaching mediated devices to virtual machines in virt-manager in some cases fails The virt-manager application is currently able to detect mediated devices, but cannot recognize whether the device is active. As a consequence, attempting to attach an inactive mediated device to a running virtual machine (VM) using virt-manager fails. Similarly, attempting to create a new VM that uses an inactive mediated device fails with a device not found error. To work around this issue, use the virsh nodedev-start or mdevctl start commands to activate the mediated device before using it in virt-manager . ( BZ#2026985 ) RHEL 9 virtual machines fail to boot in POWER8 compatibility mode Currently, booting a virtual machine (VM) that runs RHEL 9 as its guest operating system fails if the VM also uses a CPU configuration similar to the following: To work around this problem, do not use POWER8 compatibility mode in RHEL 9 VMs. In addition, note that running RHEL 9 VMs is not possible on POWER8 hosts. ( BZ#2035158 ) Virtual machines sometimes fail to start when using many virtio-blk disks Adding a large number of virtio-blk devices to a virtual machine (VM) may exhaust the number of interrupt vectors available in the platform. If this occurs, the VM's guest OS fails to boot, and displays a dracut-initqueue[392]: Warning: Could not boot error. ( BZ#1719687 ) Windows Server 2022 guests in some cases boot very slowly on AMD Milan Virtual machines (VMs) that use the Windows Server 2022 guest operating system and the qemu64 CPU model currently take a very long time to boot on hosts with an AMD EPYC 7003 series processor (also known as AMD Milan ). To work around the problem, do not use qemu64 as the CPU model, because it is an unsupported setting for VMs in RHEL 8. For example, use the host-model CPU instead. ( BZ#2012373 ) SMT CPU topology is not detected by VMs when using host passthrough mode on AMD EPYC When a virtual machine (VM) boots with the CPU host passthrough mode on an AMD EPYC host, the TOPOEXT CPU feature flag is not present. Consequently, the VM is not able to detect a virtual CPU topology with multiple threads per core. To work around this problem, boot the VM with the EPYC CPU model instead of host passthrough. ( BZ#1740002 ) Migrating VMs to later z-stream versions of RHEL 8.6 sometimes fails Attempting to migrate a virtual machine (VM) fails if the destination host uses a RHEL 8.6 version with the qemu-kvm package version 6.2.0-11.module+el8.6.0+21121 and later, and the source host uses qemu-kvm version 6.2.0-11.module+el8.6.0+21120 or earlier on RHEL 8.6. (JIRA:RHELDOCS-17666) 10.17. RHEL in cloud environments SR-IOV performs suboptimally in ARM 64 RHEL 8 virtual machines on Azure Currently, SR-IOV networking devices have significantly lower throughput and higher latency than expected in ARM 64 RHEL 8 virtual machines (VMs) running on a Microsoft Azure platform.
(BZ#2068429) Setting static IP in a RHEL 8 virtual machine on a VMware host does not work Currently, when using RHEL 8 as a guest operating system of a virtual machine (VM) on a VMware host, the DatasourceOVF function does not work correctly. As a consequence, if you use the cloud-init utility to set the VM's network to static IP and then reboot the VM, the VM's network will be changed to DHCP. ( BZ#1750862 ) kdump sometimes does not start on Azure and Hyper-V On RHEL 8 guest operating systems hosted on the Microsoft Azure or Hyper-V hypervisors, starting the kdump kernel in some cases fails when post-exec notifiers are enabled. To work around this problem, disable crash kexec post notifiers: (BZ#1865745) The SCSI host address sometimes changes when booting a Hyper-V VM with multiple guest disks Currently, when booting a RHEL 8 virtual machine (VM) on the Hyper-V hypervisor, the host portion of the Host, Bus, Target, Lun (HBTL) SCSI address in some cases changes. As a consequence, automated tasks set up with the HBTL SCSI identification or device node in the VM do not work consistently. This occurs if the VM has more than one disk or if the disks have different sizes. To work around the problem, modify your kickstart files, using one of the following methods: Method 1: Use persistent identifiers for SCSI devices. You can use for example the following powershell script to determine the specific device identifiers: You can use this script on the hyper-v host, for example as follows: Afterwards, the disk values can be used in the kickstart file, for example as follows: As these values are specific for each virtual disk, the configuration needs to be done for each VM instance. It may, therefore, be useful to use the %include syntax to place the disk information into a separate file. Method 2: Set up device selection by size. A kickstart file that configures disk selection based on size must include lines similar to the following: (BZ#1906870) Starting a RHEL 8 virtual machine on AWS using cloud-init takes longer than expected Currently, initializing an EC2 instance of RHEL 8 using the cloud-init service on Amazon Web Services (AWS) takes an excessive amount of time. To avoid this problem, remove the /etc/resolv.conf file from the image you are using for VM creation before uploading the image to AWS. (BZ#1862930) 10.18. Supportability The getattachment command fails to download multiple attachments The getattachment command is able to download only a single attachment, but fails to download multiple attachments. As a workaround, you can download multiple attachments one by one by passing the case number and UUID for each attachment in the getattachment command. ( BZ#2064575 ) redhat-support-tool does not work with the FUTURE crypto policy Because a cryptographic key used by a certificate on the Customer Portal API does not meet the requirements by the FUTURE system-wide cryptographic policy, the redhat-support-tool utility does not work with this policy level at the moment. To work around this problem, use the DEFAULT crypto policy while connecting to the Customer Portal API. ( BZ#1802026 ) Timeout when running sos report on IBM Power Systems, Little Endian When running the sos report command on IBM Power Systems, Little Endian with hundreds or thousands of CPUs, the processor plugin reaches its default timeout of 300 seconds when collecting huge content of the /sys/devices/system/cpu directory. 
As a workaround, increase the plugin's timeout accordingly: For one-time setting, run: For a permanent change, edit the [plugin_options] section of the /etc/sos/sos.conf file: The example value is set to 1800. The particular timeout value highly depends on a specific system. To set the plugin's timeout appropriately, you can first estimate the time needed to collect the one plugin with no timeout by running the following command: (BZ#2011413) 10.19. Containers Running systemd within an older container image does not work Running systemd within an older container image, for example, centos:7 , does not work: To work around this problem, use the following commands: (JIRA:RHELPLAN-96940) Container images signed with a Beta GPG key can not be pulled Currently, when you try to pull RHEL Beta container images, podman exits with the error message: Error: Source image rejected: None of the signatures were accepted . The images fail to be pulled due to current builds being configured to not trust the RHEL Beta GPG keys by default. As a workaround, ensure that the Red Hat Beta GPG key is stored on your local system and update the existing trust scope with the podman image trust set command for the appropriate beta namespace. If you do not have the Beta GPG key stored locally, you can pull it by running the following command: To add the Beta GPG key as trusted to your namespace, use one of the following commands: and Replace namespace with ubi9-beta or rhel9-beta . ( BZ#2020301 ) | [
"%pre wipefs -a /dev/sda %end",
"yum module enable libselinux-python yum install libselinux-python",
"yum module install libselinux-python:2.8/common",
"update-crypto-policies --set DEFAULT:NO-CAMELLIA",
"app pkcs15-init { framework pkcs15 { use_file_caching = false; } }",
"package xorg-x11-server-common has been added to the list of excluded packages, but it can't be removed from the current software selection without breaking the installation.",
"Title: Set SSH Client Alive Count Max to zero CCE Identifier: CCE-83405-1 Rule ID: xccdf_org.ssgproject.content_rule_sshd_set_keepalive_0 STIG ID: RHEL-08-010200 Title: Set SSH Idle Timeout Interval CCE Identifier: CCE-80906-1 Rule ID: xccdf_org.ssgproject.content_rule_sshd_set_idle_timeout STIG ID: RHEL-08-010201",
"NONE:+VERS-ALL:-VERS-TLS1.3:+MAC-ALL:+DHE-RSA:+AES-256-GCM:+SIGN-RSA-SHA384:+COMP-ALL:+GROUP-ALL",
"NONE:+VERS-ALL:-VERS-TLS1.3:+MAC-ALL:+ECDHE-RSA:+AES-128-CBC:+SIGN-RSA-SHA1:+COMP-ALL:+GROUP-ALL",
"dnf install -y ansible-core scap-security-guide rhc-worker-playbook",
"cd /usr/share/scap-security-guide/ansible",
"ANSIBLE_COLLECTIONS_PATH=/usr/share/rhc-worker-playbook/ansible/collections/ansible_collections/ ansible-playbook -c local -i localhost, rhel9-playbook- cis_server_l1 .yml",
"systemctl disable --now nm-cloud-setup.service nm-cloud-setup.timer",
"nmcli connection show",
"nmcli connection up \"<profile_name>\"",
"IPv6_rpfilter=no",
"[Match] Architecture=s390x KernelCommandLine=!net.naming-scheme=rhel-8.7 [Link] NamePolicy=kernel database slot path AlternativeNamesPolicy=database slot path MACAddressPolicy=persistent",
"cgroup: cgroup: disabling cgroup2 socket matching due to net_prio or net_cls activation",
"systemctl restart kdump.service",
"[ 2.817152] acpi PNP0A08:00: [Firmware Bug]: ECAM area [mem 0x30000000-0x31ffffff] not reserved in ACPI namespace [ 2.827911] acpi PNP0A08:00: ECAM at [mem 0x30000000-0x31ffffff] for [bus 00-1f]",
"03:00.0 Non-Volatile memory controller: Sandisk Corp WD Black 2018/PC SN720 NVMe SSD (prog-if 02 [NVM Express]) Capabilities: [900 v1] L1 PM Substates L1SubCap: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2+ ASPM_L1.1- L1_PM_Substates+ PortCommonModeRestoreTime=255us PortTPowerOnTime=10us L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1- T_CommonMode=0us LTR1.2_Threshold=0ns L1SubCtl2: T_PwrOn=10us",
"KDUMP_COMMANDLINE_REMOVE=\"hugepages hugepagesz slub_debug quiet log_buf_len swiotlb\"",
"KDUMP_COMMANDLINE_APPEND=\"irqpoll nr_cpus=1 reset_devices cgroup_disable=memory udev.children-max=2 panic=10 swiotlb=noforce novmcoredd\"",
"systemctl restart kdump",
"-mca btl openib -mca pml ucx -x UCX_NET_DEVICES=mlx5_ib0",
"-mca pml_ucx_priority 5",
"sfboot vf-msix-limit=2",
"grubby --update-kernel=ALL --args=\"skew_tick=1\"",
"systemctl enable --now blk-availability.service",
"systemctl edit radiusd [Service] Environment=RADIUS_MD5_FIPS_OVERRIDE=1",
"systemctl daemon-reload systemctl start radiusd",
"RADIUS_MD5_FIPS_OVERRIDE=1 radiusd -X",
"systemctl restart smbd",
"authselect select sssd --force",
"Security Initialization - SSL alert: Failed to set SSL cipher preference information: invalid ciphers <default,+__cipher_name__>: format is +cipher1,-cipher2... (Netscape Portable Runtime error 0 - no error)",
"The guest operating system reported that it failed with the following error code: 0x1E",
"dracut_args --omit-drivers \"radeon\" force_rebuild 1",
"perf kvm record -e trace_imc/trace_cycles/ -p <guest pid> -i",
"<cpu mode=\"host-model\"> <model>power8</model> </cpu>",
"echo N > /sys/module/kernel/parameters/crash_kexec_post_notifiers",
"Output what the /dev/disk/by-id/<value> for the specified hyper-v virtual disk. Takes a single parameter which is the virtual disk file. Note: kickstart syntax works with and without the /dev/ prefix. param ( [Parameter(Mandatory=USDtrue)][string]USDvirtualdisk ) USDwhat = Get-VHD -Path USDvirtualdisk USDpart = USDwhat.DiskIdentifier.ToLower().split('-') USDp = USDpart[0] USDs0 = USDp[6] + USDp[7] + USDp[4] + USDp[5] + USDp[2] + USDp[3] + USDp[0] + USDp[1] USDp = USDpart[1] USDs1 = USDp[2] + USDp[3] + USDp[0] + USDp[1] [string]::format(\"/dev/disk/by-id/wwn-0x60022480{0}{1}{2}\", USDs0, USDs1, USDpart[4])",
"PS C:\\Users\\Public\\Documents\\Hyper-V\\Virtual hard disks> .\\by-id.ps1 .\\Testing_8\\disk_3_8.vhdx /dev/disk/by-id/wwn-0x60022480e00bc367d7fd902e8bf0d3b4 PS C:\\Users\\Public\\Documents\\Hyper-V\\Virtual hard disks> .\\by-id.ps1 .\\Testing_8\\disk_3_9.vhdx /dev/disk/by-id/wwn-0x600224807270e09717645b1890f8a9a2",
"part / --fstype=xfs --grow --asprimary --size=8192 --ondisk=/dev/disk/by-id/wwn-0x600224807270e09717645b1890f8a9a2 part /home --fstype=\"xfs\" --grow --ondisk=/dev/disk/by-id/wwn-0x60022480e00bc367d7fd902e8bf0d3b4",
"Disk partitioning information is supplied in a file to kick start %include /tmp/disks Partition information is created during install using the %pre section %pre --interpreter /bin/bash --log /tmp/ks_pre.log # Dump whole SCSI/IDE disks out sorted from smallest to largest ouputting # just the name disks=(`lsblk -n -o NAME -l -b -x SIZE -d -I 8,3`) || exit 1 # We are assuming we have 3 disks which will be used # and we will create some variables to represent d0=USD{disks[0]} d1=USD{disks[1]} d2=USD{disks[2]} echo \"part /home --fstype=\"xfs\" --ondisk=USDd2 --grow\" >> /tmp/disks echo \"part swap --fstype=\"swap\" --ondisk=USDd0 --size=4096\" >> /tmp/disks echo \"part / --fstype=\"xfs\" --ondisk=USDd1 --grow\" >> /tmp/disks echo \"part /boot --fstype=\"xfs\" --ondisk=USDd1 --size=1024\" >> /tmp/disks %end",
"sos report -k processor.timeout=1800",
"Specify any plugin options and their values here. These options take the form plugin_name.option_name = value #rpm.rpmva = off processor.timeout = 1800",
"time sos report -o processor -k processor.timeout=0 --batch --build",
"podman run --rm -ti centos:7 /usr/lib/systemd/systemd Storing signatures Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted [!!!!!!] Failed to mount API filesystems, freezing.",
"mkdir /sys/fs/cgroup/systemd mount none -t cgroup -o none,name=systemd /sys/fs/cgroup/systemd podman run --runtime /usr/bin/crun --annotation=run.oci.systemd.force_cgroup_v1=/sys/fs/cgroup --rm -ti centos:7 /usr/lib/systemd/systemd",
"sudo wget -O /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta https://www.redhat.com/security/data/f21541eb.txt",
"sudo podman image trust set -f /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta registry.access.redhat.com/ namespace",
"sudo podman image trust set -f /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta registry.redhat.io/ namespace"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.6_release_notes/known-issues |