title | content | commands | url
---|---|---|---|
Configuring and deploying Gateway policies with Connectivity Link | Configuring and deploying Gateway policies with Connectivity Link Red Hat Connectivity Link 1.0 Secure, protect, and connect APIs on OpenShift Red Hat Connectivity Link documentation team | null | https://docs.redhat.com/en/documentation/red_hat_connectivity_link/1.0/html/configuring_and_deploying_gateway_policies_with_connectivity_link/index |
Chapter 100. Facebook Component | Chapter 100. Facebook Component Available as of Camel version 2.14 The Facebook component provides access to all of the Facebook APIs accessible using Facebook4J . It allows producing messages to retrieve, add, and delete posts, likes, comments, photos, albums, videos, checkins, locations, links, etc. It also supports APIs that allow polling for posts, users, checkins, groups, locations, etc. Facebook requires the use of OAuth for all client application authentication. In order to use camel-facebook with your account, you'll need to create a new application within Facebook at https://developers.facebook.com/apps and grant the application access to your account. The Facebook application's id and secret will allow access to Facebook APIs which do not require a current user. A user access token is required for APIs that require a logged in user. More information on obtaining a user access token can be found at https://developers.facebook.com/docs/facebook-login/access-tokens/ . Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-facebook</artifactId> <version>${camel-version}</version> </dependency> 100.1. URI format facebook://[endpoint]?[options] 100.2. FacebookComponent The facebook component can be configured with the Facebook account settings below, which are mandatory. The values can be provided to the component using the bean property configuration of type org.apache.camel.component.facebook.config.FacebookConfiguration . The oAuthAccessToken option may be omitted, but that will only allow access to application APIs. The Facebook component supports 2 options, which are listed below. Name Description Default Type configuration (advanced) To use the shared configuration FacebookConfiguration resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Facebook endpoint is configured using URI syntax: with the following path and query parameters: 100.2.1. Path Parameters (1 parameter): Name Description Default Type methodName Required What operation to perform String 100.2.2. Query Parameters (102 parameters): Name Description Default Type achievementURL (common) The unique URL of the achievement URL albumId (common) The album ID String albumUpdate (common) The facebook Album to be created or updated AlbumUpdate appId (common) The ID of the Facebook Application String center (common) Location latitude and longitude GeoLocation checkinId (common) The checkin ID String checkinUpdate (common) Deprecated The checkin to be created. Deprecated, instead create a Post with an attached location CheckinUpdate clientURL (common) Facebook4J API client URL String clientVersion (common) Facebook4J client API version String commentId (common) The comment ID String commentUpdate (common) The facebook Comment to be created or updated CommentUpdate debugEnabled (common) Enables debug output. 
Effective only with the embedded logger false Boolean description (common) The description text String distance (common) Distance in meters Integer domainId (common) The domain ID String domainName (common) The domain name String domainNames (common) The domain names List eventId (common) The event ID String eventUpdate (common) The event to be created or updated EventUpdate friendId (common) The friend ID String friendlistId (common) The friend list ID String friendlistName (common) The friend list Name String friendUserId (common) The friend user ID String groupId (common) The group ID String gzipEnabled (common) Use Facebook GZIP encoding true Boolean httpConnectionTimeout (common) Http connection timeout in milliseconds 20000 Integer httpDefaultMaxPerRoute (common) HTTP maximum connections per route 2 Integer httpMaxTotalConnections (common) HTTP maximum total connections 20 Integer httpReadTimeout (common) Http read timeout in milliseconds 120000 Integer httpRetryCount (common) Number of HTTP retries 0 Integer httpRetryIntervalSeconds (common) HTTP retry interval in seconds 5 Integer httpStreamingReadTimeout (common) HTTP streaming read timeout in milliseconds 40000 Integer ids (common) The ids of users List inBody (common) Sets the name of a parameter to be passed in the exchange In Body String includeRead (common) Enables notifications that the user has already read in addition to unread ones Boolean isHidden (common) Whether hidden Boolean jsonStoreEnabled (common) If set to true, raw JSON forms will be stored in DataObjectFactory false Boolean link (common) Link URL URL linkId (common) Link ID String locale (common) Desired FQL locale Locale mbeanEnabled (common) If set to true, Facebook4J mbean will be registerd false Boolean message (common) The message text String messageId (common) The message ID String metric (common) The metric name String milestoneId (common) The milestone id String name (common) Test user name, must be of the form 'first last' String noteId (common) The note ID String notificationId (common) The notification ID String objectId (common) The insight object ID String offerId (common) The offer id String optionDescription (common) The question's answer option description String pageId (common) The page id String permissionName (common) The permission name String permissions (common) Test user permissions in the format perm1,perm2,... String photoId (common) The photo ID String pictureId (common) The picture id Integer pictureId2 (common) The picture2 id Integer pictureSize (common) The picture size PictureSize placeId (common) The place ID String postId (common) The post ID String postUpdate (common) The post to create or update PostUpdate prettyDebugEnabled (common) Prettify JSON debug output if set to true false Boolean queries (common) FQL queries Map query (common) FQL query or search terms for search endpoints String questionId (common) The question id String reading (common) Optional reading parameters. See Reading Options(#reading) Reading readingOptions (common) To configure Reading using key/value pairs from the Map. 
Map restBaseURL (common) API base URL https://graph.facebook.com/ String scoreValue (common) The numeric score with value Integer size (common) The picture size, one of large, normal, small or square PictureSize source (common) The media content from either a java.io.File or java.io.Inputstream Media subject (common) The note of the subject String tabId (common) The tab id String tagUpdate (common) Photo tag information TagUpdate testUser1 (common) Test user 1 TestUser testUser2 (common) Test user 2 TestUser testUserId (common) The ID of the test user String title (common) The title text String toUserId (common) The ID of the user to tag String toUserIds (common) The IDs of the users to tag List userId (common) The Facebook user ID String userId1 (common) The ID of a user 1 String userId2 (common) The ID of a user 2 String userIds (common) The IDs of users to invite to event List userLocale (common) The test user locale String useSSL (common) Use SSL true Boolean videoBaseURL (common) Video API base URL https://graph-video.facebook.com/ String videoId (common) The video ID String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean httpProxyHost (proxy) HTTP proxy server host name String httpProxyPassword (proxy) HTTP proxy server password String httpProxyPort (proxy) HTTP proxy server port Integer httpProxyUser (proxy) HTTP proxy server user name String oAuthAccessToken (security) The user access token String oAuthAccessTokenURL (security) OAuth access token URL https://graph.facebook.com/oauth/access_token String oAuthAppId (security) The application Id String oAuthAppSecret (security) The application Secret String oAuthAuthorizationURL (security) OAuth authorization URL https://www.facebook.com/dialog/oauth String oAuthPermissions (security) Default OAuth permissions. Comma separated permission names. See https://developers.facebook.com/docs/reference/login/#permissions for the detail String 100.3. Spring Boot Auto-Configuration The component supports 29 options, which are listed below. Name Description Default Type camel.component.facebook.configuration.client-u-r-l Facebook4J API client URL String camel.component.facebook.configuration.client-version Facebook4J client API version String camel.component.facebook.configuration.debug-enabled Enables deubg output. 
Effective only with the embedded logger false Boolean camel.component.facebook.configuration.gzip-enabled Use Facebook GZIP encoding true Boolean camel.component.facebook.configuration.http-connection-timeout Http connection timeout in milliseconds 20000 Integer camel.component.facebook.configuration.http-default-max-per-route HTTP maximum connections per route 2 Integer camel.component.facebook.configuration.http-max-total-connections HTTP maximum total connections 20 Integer camel.component.facebook.configuration.http-proxy-host HTTP proxy server host name String camel.component.facebook.configuration.http-proxy-password HTTP proxy server password String camel.component.facebook.configuration.http-proxy-port HTTP proxy server port Integer camel.component.facebook.configuration.http-proxy-user HTTP proxy server user name String camel.component.facebook.configuration.http-read-timeout Http read timeout in milliseconds 120000 Integer camel.component.facebook.configuration.http-retry-count Number of HTTP retries 0 Integer camel.component.facebook.configuration.http-retry-interval-seconds HTTP retry interval in seconds 5 Integer camel.component.facebook.configuration.http-streaming-read-timeout HTTP streaming read timeout in milliseconds 40000 Integer camel.component.facebook.configuration.json-store-enabled If set to true, raw JSON forms will be stored in DataObjectFactory false Boolean camel.component.facebook.configuration.mbean-enabled If set to true, Facebook4J mbean will be registerd false Boolean camel.component.facebook.configuration.o-auth-access-token The user access token String camel.component.facebook.configuration.o-auth-access-token-u-r-l OAuth access token URL https://graph.facebook.com/oauth/access_token String camel.component.facebook.configuration.o-auth-app-id The application Id String camel.component.facebook.configuration.o-auth-app-secret The application Secret String camel.component.facebook.configuration.o-auth-authorization-u-r-l OAuth authorization URL https://www.facebook.com/dialog/oauth String camel.component.facebook.configuration.o-auth-permissions Default OAuth permissions. Comma separated permission names. See https://developers.facebook.com/docs/reference/login/#permissions for the detail String camel.component.facebook.configuration.pretty-debug-enabled Prettify JSON debug output if set to true false Boolean camel.component.facebook.configuration.rest-base-u-r-l API base URL https://graph.facebook.com/ String camel.component.facebook.configuration.use-s-s-l Use SSL true Boolean camel.component.facebook.configuration.video-base-u-r-l Video API base URL https://graph-video.facebook.com/ String camel.component.facebook.enabled Enable facebook component true Boolean camel.component.facebook.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 100.4. Producer Endpoints: Producer endpoints can use endpoint names and options from the table below. Endpoints can also use the short name without the get or search prefix, except checkin due to ambiguity between getCheckin and searchCheckin . Endpoint options that are not mandatory are denoted by []. Producer endpoints can also use a special option inBody that in turn should contain the name of the endpoint option whose value will be contained in the Camel Exchange In message. 
For example, the facebook endpoint in the following route retrieves activities for the user id value in the incoming message body. from("direct:test").to("facebook://activities?inBody=userId")... Any of the endpoint options can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelFacebook.[option] . For example, the userId option value in the route could alternatively be provided in the message header CamelFacebook.userId . Note that the inBody option overrides the message header, e.g. the endpoint option inBody=user would override a CamelFacebook.userId header. Endpoints that return a String return an Id for the created or modified entity, e.g. addAlbumPhoto returns the new album Id. Endpoints that return a boolean return true for success and false otherwise. In case of Facebook API errors, the endpoint throws a RuntimeCamelException with a facebook4j.FacebookException cause. 100.5. Consumer Endpoints: Any of the producer endpoints that take a reading parameter can be used as a consumer endpoint. The polling consumer uses the since and until fields to get responses within the polling interval. In addition to other reading fields, an initial since value can be provided in the endpoint for the first poll. Rather than the endpoints returning a List (or facebook4j.ResponseList ) through a single route exchange, camel-facebook creates one route exchange per returned object. As an example, if "facebook://home" results in five posts, the route will be executed five times (once for each Post). 100.6. Reading Options The reading option of type facebook4j.Reading adds support for reading parameters, which allow selecting specific fields, limiting the number of results, and so on. For more information, see Reading in the Graph API documentation at Facebook Developers. It is also used by consumer endpoints to poll Facebook data to avoid sending duplicate messages across polls. The reading option can be a reference or value of type facebook4j.Reading , or can be specified using the individual reading options in either the endpoint URI or an exchange header with the CamelFacebook. prefix. 100.7. Message header Any of the URI options can be provided in a message header for producer endpoints with the CamelFacebook. prefix. 100.8. Message body All result message bodies utilize objects provided by the Facebook4J API. Producer endpoints can specify the option name for the incoming message body in the inBody endpoint parameter. For endpoints that return an array, a facebook4j.ResponseList , or a java.util.List , a consumer endpoint will map every element in the list to a distinct message. 100.9. Use cases To create a post within your Facebook profile, send this producer a facebook4j.PostUpdate body. from("direct:foo") .to("facebook://postFeed?inBody=postUpdate"); To poll all statuses on your home feed every 5 seconds (you can set the polling consumer options by adding a prefix of "consumer"), use: from("facebook://home?consumer.delay=5000") .to("bean:blah"); Searching using a producer with dynamic options from a header. In the bar header we have the Facebook search string we want to execute in public posts, so we need to assign this value to the CamelFacebook.query header. from("direct:foo") .setHeader("CamelFacebook.query", header("bar")) .to("facebook://posts"); | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-facebook</artifactId> <version>USD{camel-version}</version> </dependency>",
"facebook://[endpoint]?[options]",
"facebook:methodName",
"from(\"direct:test\").to(\"facebook://activities?inBody=userId\")",
"from(\"direct:foo\") .to(\"facebook://postFeed/inBody=postUpdate);",
"from(\"facebook://home?consumer.delay=5000\") .to(\"bean:blah\");",
"from(\"direct:foo\") .setHeader(\"CamelFacebook.query\", header(\"bar\")) .to(\"facebook://posts\");"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/facebook-component |
Chapter 6. Preparing an Agent-based installed cluster for the multicluster engine for Kubernetes Operator | Chapter 6. Preparing an Agent-based installed cluster for the multicluster engine for Kubernetes Operator You can install the multicluster engine Operator and deploy a hub cluster with the Agent-based OpenShift Container Platform Installer. The following procedure is partially automated and requires manual steps after the initial cluster is deployed. 6.1. Prerequisites You have read the following documentation: Cluster lifecycle with multicluster engine operator overview . Persistent storage using local volumes . Using ZTP to provision clusters at the network far edge . Preparing to install with the Agent-based Installer . About disconnected installation mirroring . You have access to the internet to obtain the necessary container images. You have installed the OpenShift CLI ( oc ). If you are installing in a disconnected environment, you must have a configured local mirror registry for disconnected installation mirroring. 6.2. Preparing an Agent-based cluster deployment for the multicluster engine for Kubernetes Operator while disconnected You can mirror the required OpenShift Container Platform container images, the multicluster engine Operator, and the Local Storage Operator (LSO) into your local mirror registry in a disconnected environment. Ensure that you note the local DNS hostname and port of your mirror registry. Note To mirror your OpenShift Container Platform image repository to your mirror registry, you can use either the oc adm release mirror or oc mirror command. In this procedure, the oc mirror command is used as an example. Procedure Create an <assets_directory> folder to contain valid install-config.yaml and agent-config.yaml files. This directory is used to store all the assets. To mirror an OpenShift Container Platform image repository, the multicluster engine, and the LSO, create an ImageSetConfiguration.yaml file with the following settings: Example ImageSetConfiguration.yaml kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 4 1 storageConfig: 2 imageURL: <your-local-registry-dns-name>:<your-local-registry-port>/mirror/oc-mirror-metadata 3 skipTLS: true mirror: platform: architectures: - "amd64" channels: - name: stable-4.14 4 type: ocp additionalImages: - name: registry.redhat.io/ubi9/ubi:latest operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 5 packages: 6 - name: multicluster-engine 7 - name: local-storage-operator 8 1 Specify the maximum size, in GiB, of each file within the image set. 2 Set the back-end location to receive the image set metadata. This location can be a registry or local directory. It is required to specify storageConfig values. 3 Set the registry URL for the storage backend. 4 Set the channel that contains the OpenShift Container Platform images for the version you are installing. 5 Set the Operator catalog that contains the OpenShift Container Platform images that you are installing. 6 Specify only certain Operator packages and channels to include in the image set. Remove this field to retrieve all packages in the catalog. 7 The multicluster engine packages and channels. 8 The LSO packages and channels. Note This file is required by the oc mirror command when mirroring content. 
To mirror a specific OpenShift Container Platform image repository, the multicluster engine, and the LSO, run the following command: USD oc mirror --dest-skip-tls --config ocp-mce-imageset.yaml docker://<your-local-registry-dns-name>:<your-local-registry-port> Update the registry and certificate in the install-config.yaml file: Example imageContentSources.yaml imageContentSources: - source: "quay.io/openshift-release-dev/ocp-release" mirrors: - "<your-local-registry-dns-name>:<your-local-registry-port>/openshift/release-images" - source: "quay.io/openshift-release-dev/ocp-v4.0-art-dev" mirrors: - "<your-local-registry-dns-name>:<your-local-registry-port>/openshift/release" - source: "registry.redhat.io/ubi9" mirrors: - "<your-local-registry-dns-name>:<your-local-registry-port>/ubi9" - source: "registry.redhat.io/multicluster-engine" mirrors: - "<your-local-registry-dns-name>:<your-local-registry-port>/multicluster-engine" - source: "registry.redhat.io/rhel8" mirrors: - "<your-local-registry-dns-name>:<your-local-registry-port>/rhel8" - source: "registry.redhat.io/redhat" mirrors: - "<your-local-registry-dns-name>:<your-local-registry-port>/redhat" Additionally, ensure your certificate is present in the additionalTrustBundle field of the install-config.yaml . Example install-config.yaml additionalTrustBundle: | -----BEGIN CERTIFICATE----- zzzzzzzzzzz -----END CERTIFICATE------- Important The oc mirror command creates a folder called oc-mirror-workspace with several outputs. This includes the imageContentSourcePolicy.yaml file that identifies all the mirrors you need for OpenShift Container Platform and your selected Operators. Generate the cluster manifests by running the following command: USD openshift-install agent create cluster-manifests This command updates the cluster manifests folder to include a mirror folder that contains your mirror configuration. 6.3. Preparing an Agent-based cluster deployment for the multicluster engine for Kubernetes Operator while connected Create the required manifests for the multicluster engine Operator, the Local Storage Operator (LSO), and to deploy an agent-based OpenShift Container Platform cluster as a hub cluster. Procedure Create a sub-folder named openshift in the <assets_directory> folder. This sub-folder is used to store the extra manifests that will be applied during the installation to further customize the deployed cluster. The <assets_directory> folder contains all the assets including the install-config.yaml and agent-config.yaml files. Note The installer does not validate extra manifests. For the multicluster engine, create the following manifests and save them in the <assets_directory>/openshift folder: Example mce_namespace.yaml apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" name: multicluster-engine Example mce_operatorgroup.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: multicluster-engine-operatorgroup namespace: multicluster-engine spec: targetNamespaces: - multicluster-engine Example mce_subscription.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: multicluster-engine namespace: multicluster-engine spec: channel: "stable-2.3" name: multicluster-engine source: redhat-operators sourceNamespace: openshift-marketplace Note You can install a distributed unit (DU) at scale with the Red Hat Advanced Cluster Management (RHACM) using the assisted installer (AI). These distributed units must be enabled in the hub cluster. 
The AI service requires persistent volumes (PVs), which are manually created. For the AI service, create the following manifests and save them in the <assets_directory>/openshift folder: Example lso_namespace.yaml apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/cluster-monitoring: "true" name: openshift-local-storage Example lso_operatorgroup.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: local-operator-group namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage Example lso_subscription.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: installPlanApproval: Automatic name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace Note After creating all the manifests, your filesystem must display as follows: Example Filesystem <assets_directory> ββ install-config.yaml ββ agent-config.yaml ββ /openshift ββ mce_namespace.yaml ββ mce_operatorgroup.yaml ββ mce_subscription.yaml ββ lso_namespace.yaml ββ lso_operatorgroup.yaml ββ lso_subscription.yaml Create the agent ISO image by running the following command: USD openshift-install agent create image --dir <assets_directory> When the image is ready, boot the target machine and wait for the installation to complete. To monitor the installation, run the following command: USD openshift-install agent wait-for install-complete --dir <assets_directory> Note To configure a fully functional hub cluster, you must create the following manifests and manually apply them by running the command USD oc apply -f <manifest-name> . The order of the manifest creation is important and where required, the waiting condition is displayed. For the PVs that are required by the AI service, create the following manifests: apiVersion: local.storage.openshift.io/v1 kind: LocalVolume metadata: name: assisted-service namespace: openshift-local-storage spec: logLevel: Normal managementState: Managed storageClassDevices: - devicePaths: - /dev/vda - /dev/vdb storageClassName: assisted-service volumeMode: Filesystem Use the following command to wait for the availability of the PVs, before applying the subsequent manifests: USD oc wait localvolume -n openshift-local-storage assisted-service --for condition=Available --timeout 10m Note Create a manifest for a multicluster engine instance. Example MultiClusterEngine.yaml apiVersion: multicluster.openshift.io/v1 kind: MultiClusterEngine metadata: name: multiclusterengine spec: {} Create a manifest to enable the AI service. Example agentserviceconfig.yaml apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent namespace: assisted-installer spec: databaseStorage: storageClassName: assisted-service accessModes: - ReadWriteOnce resources: requests: storage: 10Gi filesystemStorage: storageClassName: assisted-service accessModes: - ReadWriteOnce resources: requests: storage: 10Gi Create a manifest to deploy subsequently spoke clusters. Example clusterimageset.yaml apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: "4.14" spec: releaseImage: quay.io/openshift-release-dev/ocp-release:4.14.0-x86_64 Create a manifest to import the agent installed cluster (that hosts the multicluster engine and the Assisted Service) as the hub cluster. 
Example autoimport.yaml apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: labels: local-cluster: "true" cloud: auto-detect vendor: auto-detect name: local-cluster spec: hubAcceptsClient: true Wait for the managed cluster to be created. USD oc wait -n multicluster-engine managedclusters local-cluster --for condition=ManagedClusterJoined=True --timeout 10m Verification To confirm that the managed cluster installation is successful, run the following command: USD oc get managedcluster NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://<your cluster url>:6443 True True 77m Additional resources The Local Storage Operator | [
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 4 1 storageConfig: 2 imageURL: <your-local-registry-dns-name>:<your-local-registry-port>/mirror/oc-mirror-metadata 3 skipTLS: true mirror: platform: architectures: - \"amd64\" channels: - name: stable-4.14 4 type: ocp additionalImages: - name: registry.redhat.io/ubi9/ubi:latest operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 5 packages: 6 - name: multicluster-engine 7 - name: local-storage-operator 8",
"oc mirror --dest-skip-tls --config ocp-mce-imageset.yaml docker://<your-local-registry-dns-name>:<your-local-registry-port>",
"imageContentSources: - source: \"quay.io/openshift-release-dev/ocp-release\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/openshift/release-images\" - source: \"quay.io/openshift-release-dev/ocp-v4.0-art-dev\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/openshift/release\" - source: \"registry.redhat.io/ubi9\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/ubi9\" - source: \"registry.redhat.io/multicluster-engine\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/multicluster-engine\" - source: \"registry.redhat.io/rhel8\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/rhel8\" - source: \"registry.redhat.io/redhat\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/redhat\"",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- zzzzzzzzzzz -----END CERTIFICATE-------",
"openshift-install agent create cluster-manifests",
"apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" name: multicluster-engine",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: multicluster-engine-operatorgroup namespace: multicluster-engine spec: targetNamespaces: - multicluster-engine",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: multicluster-engine namespace: multicluster-engine spec: channel: \"stable-2.3\" name: multicluster-engine source: redhat-operators sourceNamespace: openshift-marketplace",
"apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/cluster-monitoring: \"true\" name: openshift-local-storage",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: local-operator-group namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: installPlanApproval: Automatic name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"<assets_directory> ββ install-config.yaml ββ agent-config.yaml ββ /openshift ββ mce_namespace.yaml ββ mce_operatorgroup.yaml ββ mce_subscription.yaml ββ lso_namespace.yaml ββ lso_operatorgroup.yaml ββ lso_subscription.yaml",
"openshift-install agent create image --dir <assets_directory>",
"openshift-install agent wait-for install-complete --dir <assets_directory>",
"apiVersion: local.storage.openshift.io/v1 kind: LocalVolume metadata: name: assisted-service namespace: openshift-local-storage spec: logLevel: Normal managementState: Managed storageClassDevices: - devicePaths: - /dev/vda - /dev/vdb storageClassName: assisted-service volumeMode: Filesystem",
"oc wait localvolume -n openshift-local-storage assisted-service --for condition=Available --timeout 10m",
"The `devicePath` is an example and may vary depending on the actual hardware configuration used.",
"apiVersion: multicluster.openshift.io/v1 kind: MultiClusterEngine metadata: name: multiclusterengine spec: {}",
"apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent namespace: assisted-installer spec: databaseStorage: storageClassName: assisted-service accessModes: - ReadWriteOnce resources: requests: storage: 10Gi filesystemStorage: storageClassName: assisted-service accessModes: - ReadWriteOnce resources: requests: storage: 10Gi",
"apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: \"4.14\" spec: releaseImage: quay.io/openshift-release-dev/ocp-release:4.14.0-x86_64",
"apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: labels: local-cluster: \"true\" cloud: auto-detect vendor: auto-detect name: local-cluster spec: hubAcceptsClient: true",
"oc wait -n multicluster-engine managedclusters local-cluster --for condition=ManagedClusterJoined=True --timeout 10m",
"oc get managedcluster NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://<your cluster url>:6443 True True 77m"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_an_on-premise_cluster_with_the_agent-based_installer/preparing-an-agent-based-installed-cluster-for-the-multicluster-engine-for-kubernetes |
3.8. Configuring Error Behavior | 3.8. Configuring Error Behavior When an error occurs during an I/O operation, the XFS driver responds in one of two ways: Continue retries until either: the I/O operation succeeds, or an I/O operation retry count or time limit is exceeded. Consider the error permanent and halt the system. XFS currently recognizes the following error conditions for which you can configure the desired behavior specifically: EIO : Error while trying to write to the device ENOSPC : No space left on the device ENODEV : Device cannot be found All other possible error conditions, which do not have specific handlers defined, share a single, global configuration. You can set the conditions under which XFS deems the errors permanent, both in the maximum number of retries and the maximum time in seconds. XFS stops retrying when any one of the conditions is met. There is also an option to immediately cancel the retries when unmounting the file system, regardless of any other configuration. This allows the unmount operation to succeed even in case of persistent errors. 3.8.1. Configuration Files for Specific and Undefined Conditions Configuration files controlling error behavior are located in the /sys/fs/xfs/device/error/ directory. The /sys/fs/xfs/ device /error/metadata/ directory contains subdirectories for each specific error condition: /sys/fs/xfs/ device /error/metadata/EIO/ for the EIO error condition /sys/fs/xfs/ device /error/metadata/ENODEV/ for the ENODEV error condition /sys/fs/xfs/ device /error/metadata/ENOSPC/ for the ENOSPC error condition Each one then contains the following configuration files: /sys/fs/xfs/ device /error/metadata/ condition /max_retries : controls the maximum number of times that XFS retries the operation. /sys/fs/xfs/ device /error/metadata/ condition /retry_timeout_seconds : the time limit in seconds after which XFS will stop retrying the operation. All other possible error conditions, apart from those described above, share a common configuration in these files: /sys/fs/xfs/ device /error/metadata/default/max_retries : controls the maximum number of retries /sys/fs/xfs/ device /error/metadata/default/retry_timeout_seconds : controls the time limit for retrying 3.8.2. Setting File System Behavior for Specific and Undefined Conditions To set the maximum number of retries, write the desired number to the max_retries file. For specific conditions: For undefined conditions: value is a number between -1 and the maximum possible value of int , the C signed integer type. This is 2147483647 on 64-bit Linux. To set the time limit, write the desired number of seconds to the retry_timeout_seconds file. For specific conditions: For undefined conditions: value is a number between -1 and 86400 , which is the number of seconds in a day. In both the max_retries and retry_timeout_seconds options, -1 means to retry forever and 0 to stop immediately. device is the name of the device, as found in the /dev/ directory; for example, sda . Note The default behavior for each error condition is dependent on the error context. Some errors, like ENODEV , are considered to be fatal and unrecoverable, regardless of the retry count, so their default value is 0 . 3.8.3. Setting Unmount Behavior If the fail_at_unmount option is set, the file system overrides all other error configurations during unmount, and immediately unmounts the file system without retrying the I/O operation. This allows the unmount operation to succeed even in case of persistent errors. 
To set the unmount behavior: value is either 1 or 0 : 1 means to cancel retrying immediately if an error is found. 0 means to respect the max_retries and retry_timeout_seconds options. device is the name of the device, as found in the /dev/ directory; for example, sda . Important The fail_at_unmount option has to be set as desired before attempting to unmount the file system. After an unmount operation has started, the configuration files and directories may be unavailable. | [
"echo value > /sys/fs/xfs/ device /error/metadata/ condition /max_retries",
"echo value > /sys/fs/xfs/ device /error/metadata/default/max_retries",
"echo value > /sys/fs/xfs/ device /error/metadata/ condition /retry_timeout_seconds",
"echo value > /sys/fs/xfs/ device /error/metadata/default/retry_timeout_seconds",
"echo value > /sys/fs/xfs/ device /error/fail_at_unmount"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/xfs-error-behavior |
probe::socket.close | probe::socket.close Name probe::socket.close - Close a socket Synopsis Values protocol Protocol value flags Socket flags value name Name of this probe state Socket state value type Socket type value family Protocol family value Context The requester (user process or kernel) Description Fires at the beginning of closing a socket. | [
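"A short SystemTap script (illustrative only, not part of the original) that uses this probe and prints some of the values listed above:",
"probe socket.close { printf(\"%s: family=%d type=%d state=%d\\n\", name, family, type, state) }",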
"socket.close"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-socket-close |
7.106. ksh | 7.106. ksh 7.106.1. RHBA-2013:0430 - ksh bug fix and enhancement update Updated ksh packages that fix several bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. KSH-93 is the most recent version of the KornShell by David Korn of AT&T Bell Laboratories. KornShell is a shell programming language which is also compatible with sh, the original Bourne Shell. Bug Fixes BZ#827512 Originally, ksh buffered output of a subshell, flushing it when the subshell completed. This slowed certain processes that waited for a particular output, because they had to wait for the subshell to complete. Moreover, it made it difficult to determine the order of events. The new version of ksh flushes output of the subshell every time the subshell executes a new command. Thanks to this change, processes waiting for the subshell output receive their data after every subshell command and the order of events is preserved. BZ#846663 Previously, the sfprints() function was unsafe to be called during the shell initialization, which could corrupt the memory. Consequently, assigning a right-aligned variable to a smaller size could result in inappropriate output format. With this update, the sfprints() call is no longer used in the described scenario, which fixes the format of the output. BZ#846678 Due to a bug in the typeset command, when executed with the -Z option, output was being formatted to an incorrect width. As a result, exporting a right-aligned variable of a smaller size than the predefined field size caused it to not be prepended with the "0" character. A patch has been provided and the typeset command now works as expected in the aforementioned scenario. Enhancement BZ#869155 With this update, ksh has been enhanced to support logging of the shell output. Users of ksh are advised to upgrade to these updated packages, which fix these bugs and add this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/ksh |
4.7. Synchronizing Configuration Files | 4.7. Synchronizing Configuration Files After configuring the primary LVS router, there are several configuration files that must be copied to the backup LVS router before you start LVS. These files include: /etc/sysconfig/ha/lvs.cf - the configuration file for the LVS routers. /etc/sysctl.conf - the configuration file that, among other things, turns on packet forwarding in the kernel. /etc/sysconfig/iptables - If you are using firewall marks, you should synchronize one of these files based on which network packet filter you are using. Important The /etc/sysctl.conf and /etc/sysconfig/iptables files do not change when you configure LVS using the Piranha Configuration Tool . 4.7.1. Synchronizing lvs.cf Anytime the LVS configuration file, /etc/sysconfig/ha/lvs.cf , is created or updated, you must copy it to the backup LVS router node. Warning Both the active and backup LVS router nodes must have identical lvs.cf files. Mismatched LVS configuration files between the LVS router nodes can prevent failover. The best way to do this is to use the scp command. Important To use scp , the sshd service must be running on the backup router. See Section 2.1, "Configuring Services on the LVS Routers" for details on how to properly configure the necessary services on the LVS routers. Issue the following command as the root user from the primary LVS router to sync the lvs.cf files between the router nodes: scp /etc/sysconfig/ha/lvs.cf n.n.n.n:/etc/sysconfig/ha/lvs.cf In the command, replace n.n.n.n with the real IP address of the backup LVS router. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/s1-lvs-sync-VSA
Building applications | Building applications OpenShift Container Platform 4.12 Creating and managing applications on OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"oc new-project <project_name> --description=\"<description>\" --display-name=\"<display_name>\"",
"oc new-project hello-openshift --description=\"This is an example project\" --display-name=\"Hello OpenShift\"",
"oc get projects",
"oc project <project_name>",
"apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: projectAccess: availableClusterRoles: - admin - edit - view",
"oc project <project_name> 1",
"oc status",
"oc delete project <project_name> 1",
"oc new-project <project> --as=<user> --as-group=system:authenticated --as-group=system:authenticated:oauth",
"oc adm create-bootstrap-project-template -o yaml > template.yaml",
"oc create -f template.yaml -n openshift-config",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>",
"oc describe clusterrolebinding.rbac self-provisioners",
"Name: self-provisioners Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate=true Role: Kind: ClusterRole Name: self-provisioner Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated:oauth",
"oc patch clusterrolebinding.rbac self-provisioners -p '{\"subjects\": null}'",
"oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth",
"oc edit clusterrolebinding.rbac self-provisioners",
"apiVersion: authorization.openshift.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"false\"",
"oc patch clusterrolebinding.rbac self-provisioners -p '{ \"metadata\": { \"annotations\": { \"rbac.authorization.kubernetes.io/autoupdate\": \"false\" } } }'",
"oc new-project test",
"Error from server (Forbidden): You may not request a new project via this API.",
"You may not request a new project via this API.",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestMessage: <message_string>",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestMessage: To request a project, contact your system administrator at [email protected].",
"oc get csv",
"oc policy add-role-to-user edit <user> -n <target_project>",
"oc new-app /<path to source code>",
"oc new-app https://github.com/sclorg/cakephp-ex",
"oc new-app https://github.com/youruser/yourprivaterepo --source-secret=yoursecret",
"oc new-app https://github.com/sclorg/s2i-ruby-container.git --context-dir=2.0/test/puma-test-app",
"oc new-app https://github.com/openshift/ruby-hello-world.git#beta4",
"oc new-app /home/user/code/myapp --strategy=docker",
"oc new-app myproject/my-ruby~https://github.com/openshift/ruby-hello-world.git",
"oc new-app openshift/ruby-20-centos7:latest~/home/user/code/my-ruby-app",
"oc new-app mysql",
"oc new-app myregistry:5000/example/myimage",
"oc new-app my-stream:v1",
"oc create -f examples/sample-app/application-template-stibuild.json",
"oc new-app ruby-helloworld-sample",
"oc new-app -f examples/sample-app/application-template-stibuild.json",
"oc new-app ruby-helloworld-sample -p ADMIN_USERNAME=admin -p ADMIN_PASSWORD=mypassword",
"ADMIN_USERNAME=admin ADMIN_PASSWORD=mypassword",
"oc new-app ruby-helloworld-sample --param-file=helloworld.params",
"oc new-app openshift/postgresql-92-centos7 -e POSTGRESQL_USER=user -e POSTGRESQL_DATABASE=db -e POSTGRESQL_PASSWORD=password",
"POSTGRESQL_USER=user POSTGRESQL_DATABASE=db POSTGRESQL_PASSWORD=password",
"oc new-app openshift/postgresql-92-centos7 --env-file=postgresql.env",
"cat postgresql.env | oc new-app openshift/postgresql-92-centos7 --env-file=-",
"oc new-app openshift/ruby-23-centos7 --build-env HTTP_PROXY=http://myproxy.net:1337/ --build-env GEM_HOME=~/.gem",
"HTTP_PROXY=http://myproxy.net:1337/ GEM_HOME=~/.gem",
"oc new-app openshift/ruby-23-centos7 --build-env-file=ruby.env",
"cat ruby.env | oc new-app openshift/ruby-23-centos7 --build-env-file=-",
"oc new-app https://github.com/openshift/ruby-hello-world -l name=hello-world",
"oc new-app https://github.com/openshift/ruby-hello-world -o yaml > myapp.yaml",
"vi myapp.yaml",
"oc create -f myapp.yaml",
"oc new-app https://github.com/openshift/ruby-hello-world --name=myapp",
"oc new-app https://github.com/openshift/ruby-hello-world -n myproject",
"oc new-app https://github.com/openshift/ruby-hello-world mysql",
"oc new-app ruby+mysql",
"oc new-app ruby~https://github.com/openshift/ruby-hello-world mysql --group=ruby+mysql",
"oc new-app --search php",
"`postgresclusters.postgres-operator.crunchydata.com \"hippo\" is forbidden: User \"system:serviceaccount:my-petclinic:service-binding-operator\" cannot get resource \"postgresclusters\" in API group \"postgres-operator.crunchydata.com\" in the namespace \"my-petclinic\"`",
"kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: service-binding-crunchy-postgres-viewer subjects: - kind: ServiceAccount name: service-binding-operator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: service-binding-crunchy-postgres-viewer-role",
"`postgresclusters.postgres-operator.crunchydata.com \"hippo\" is forbidden: User \"system:serviceaccount:my-petclinic:service-binding-operator\" cannot get resource \"postgresclusters\" in API group \"postgres-operator.crunchydata.com\" in the namespace \"my-petclinic\"`",
"kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: service-binding-crunchy-postgres-viewer subjects: - kind: ServiceAccount name: service-binding-operator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: service-binding-crunchy-postgres-viewer-role",
"`postgresclusters.postgres-operator.crunchydata.com \"hippo\" is forbidden: User \"system:serviceaccount:my-petclinic:service-binding-operator\" cannot get resource \"postgresclusters\" in API group \"postgres-operator.crunchydata.com\" in the namespace \"my-petclinic\"`",
"kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: service-binding-crunchy-postgres-viewer subjects: - kind: ServiceAccount name: service-binding-operator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: service-binding-crunchy-postgres-viewer-role",
"`postgresclusters.postgres-operator.crunchydata.com \"hippo\" is forbidden: User \"system:serviceaccount:my-petclinic:service-binding-operator\" cannot get resource \"postgresclusters\" in API group \"postgres-operator.crunchydata.com\" in the namespace \"my-petclinic\"`",
"kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: service-binding-crunchy-postgres-viewer subjects: - kind: ServiceAccount name: service-binding-operator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: service-binding-crunchy-postgres-viewer-role",
"oc apply -n my-petclinic -f - << EOD --- apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo spec: image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:ubi8-14.4-0 postgresVersion: 14 instances: - name: instance1 dataVolumeClaimSpec: accessModes: - \"ReadWriteOnce\" resources: requests: storage: 1Gi backups: pgbackrest: image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:ubi8-2.38-0 repos: - name: repo1 volume: volumeClaimSpec: accessModes: - \"ReadWriteOnce\" resources: requests: storage: 1Gi EOD",
"postgrescluster.postgres-operator.crunchydata.com/hippo created",
"oc get pods -n my-petclinic",
"NAME READY STATUS RESTARTS AGE hippo-backup-9rxm-88rzq 0/1 Completed 0 2m2s hippo-instance1-6psd-0 4/4 Running 0 3m28s hippo-repo-host-0 2/2 Running 0 3m28s",
"oc apply -n my-petclinic -f - << EOD --- apiVersion: apps/v1 kind: Deployment metadata: name: spring-petclinic labels: app: spring-petclinic spec: replicas: 1 selector: matchLabels: app: spring-petclinic template: metadata: labels: app: spring-petclinic spec: containers: - name: app image: quay.io/service-binding/spring-petclinic:latest imagePullPolicy: Always env: - name: SPRING_PROFILES_ACTIVE value: postgres ports: - name: http containerPort: 8080 --- apiVersion: v1 kind: Service metadata: labels: app: spring-petclinic name: spring-petclinic spec: type: NodePort ports: - port: 80 protocol: TCP targetPort: 8080 selector: app: spring-petclinic EOD",
"deployment.apps/spring-petclinic created service/spring-petclinic created",
"oc get pods -n my-petclinic",
"NAME READY STATUS RESTARTS AGE spring-petclinic-5b4c7999d4-wzdtz 0/1 CrashLoopBackOff 4 (13s ago) 2m25s",
"oc expose service spring-petclinic -n my-petclinic",
"route.route.openshift.io/spring-petclinic exposed",
"oc apply -n my-petclinic -f - << EOD --- apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: services: 1 - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster 2 name: hippo application: 3 name: spring-petclinic group: apps version: v1 resource: deployments EOD",
"servicebinding.binding.operators.coreos.com/spring-petclinic created",
"oc get servicebindings -n my-petclinic",
"NAME READY REASON AGE spring-petclinic-pgcluster True ApplicationsBound 7s",
"for i in username password host port type; do oc exec -it deploy/spring-petclinic -n my-petclinic -- /bin/bash -c 'cd /tmp; find /bindings/*/'USDi' -exec echo -n {}:\" \" \\; -exec cat {} \\;'; echo; done",
"/bindings/spring-petclinic-pgcluster/username: <username> /bindings/spring-petclinic-pgcluster/password: <password> /bindings/spring-petclinic-pgcluster/host: hippo-primary.my-petclinic.svc /bindings/spring-petclinic-pgcluster/port: 5432 /bindings/spring-petclinic-pgcluster/type: postgresql",
"oc port-forward --address 0.0.0.0 svc/spring-petclinic 8080:80 -n my-petclinic",
"Forwarding from 0.0.0.0:8080 -> 8080 Handling connection for 8080",
"oc apply -f - << EOD --- apiVersion: v1 kind: Namespace metadata: name: my-petclinic --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: postgres-operator-group namespace: my-petclinic --- apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: ibm-multiarch-catalog namespace: openshift-marketplace spec: sourceType: grpc image: quay.io/ibm/operator-registry-<architecture> 1 imagePullPolicy: IfNotPresent displayName: ibm-multiarch-catalog updateStrategy: registryPoll: interval: 30m --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: postgresql-operator-dev4devs-com namespace: openshift-operators spec: channel: alpha installPlanApproval: Automatic name: postgresql-operator-dev4devs-com source: ibm-multiarch-catalog sourceNamespace: openshift-marketplace --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: database-view labels: servicebinding.io/controller: \"true\" rules: - apiGroups: - postgresql.dev4devs.com resources: - databases verbs: - get - list EOD",
"oc get subs -n openshift-operators",
"NAME PACKAGE SOURCE CHANNEL postgresql-operator-dev4devs-com postgresql-operator-dev4devs-com ibm-multiarch-catalog alpha rh-service-binding-operator rh-service-binding-operator redhat-operators stable",
"oc apply -f - << EOD apiVersion: postgresql.dev4devs.com/v1alpha1 kind: Database metadata: name: sampledatabase namespace: my-petclinic annotations: host: sampledatabase type: postgresql port: \"5432\" service.binding/database: 'path={.spec.databaseName}' service.binding/port: 'path={.metadata.annotations.port}' service.binding/password: 'path={.spec.databasePassword}' service.binding/username: 'path={.spec.databaseUser}' service.binding/type: 'path={.metadata.annotations.type}' service.binding/host: 'path={.metadata.annotations.host}' spec: databaseCpu: 30m databaseCpuLimit: 60m databaseMemoryLimit: 512Mi databaseMemoryRequest: 128Mi databaseName: \"sampledb\" databaseNameKeyEnvVar: POSTGRESQL_DATABASE databasePassword: \"samplepwd\" databasePasswordKeyEnvVar: POSTGRESQL_PASSWORD databaseStorageRequest: 1Gi databaseUser: \"sampleuser\" databaseUserKeyEnvVar: POSTGRESQL_USER image: registry.redhat.io/rhel8/postgresql-13:latest databaseStorageClassName: nfs-storage-provisioner size: 1 EOD",
"database.postgresql.dev4devs.com/sampledatabase created",
"oc get pods -n my-petclinic",
"NAME READY STATUS RESTARTS AGE sampledatabase-cbc655488-74kss 0/1 Running 0 32s",
"oc apply -n my-petclinic -f - << EOD --- apiVersion: apps/v1 kind: Deployment metadata: name: spring-petclinic labels: app: spring-petclinic spec: replicas: 1 selector: matchLabels: app: spring-petclinic template: metadata: labels: app: spring-petclinic spec: containers: - name: app image: quay.io/service-binding/spring-petclinic:latest imagePullPolicy: Always env: - name: SPRING_PROFILES_ACTIVE value: postgres - name: org.springframework.cloud.bindings.boot.enable value: \"true\" ports: - name: http containerPort: 8080 --- apiVersion: v1 kind: Service metadata: labels: app: spring-petclinic name: spring-petclinic spec: type: NodePort ports: - port: 80 protocol: TCP targetPort: 8080 selector: app: spring-petclinic EOD",
"deployment.apps/spring-petclinic created service/spring-petclinic created",
"oc get pods -n my-petclinic",
"NAME READY STATUS RESTARTS AGE spring-petclinic-5b4c7999d4-wzdtz 0/1 CrashLoopBackOff 4 (13s ago) 2m25s",
"oc apply -n my-petclinic -f - << EOD --- apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: services: 1 - group: postgresql.dev4devs.com kind: Database 2 name: sampledatabase version: v1alpha1 application: 3 name: spring-petclinic group: apps version: v1 resource: deployments EOD",
"servicebinding.binding.operators.coreos.com/spring-petclinic created",
"oc get servicebindings -n my-petclinic",
"NAME READY REASON AGE spring-petclinic-postgresql True ApplicationsBound 47m",
"oc port-forward --address 0.0.0.0 svc/spring-petclinic 8080:80 -n my-petclinic",
"Forwarding from 0.0.0.0:8080 -> 8080 Handling connection for 8080",
"apiVersion: example.com/v1alpha1 kind: AccountService name: prod-account-service spec: status: binding: name: hippo-pguser-hippo",
"apiVersion: v1 kind: Secret metadata: name: hippo-pguser-hippo data: password: \"<password>\" user: \"<username>\"",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: account-service spec: services: - group: \"example.com\" version: v1alpha1 kind: AccountService name: prod-account-service application: name: spring-petclinic group: apps version: v1 resource: deployments",
"apiVersion: servicebinding.io/v1beta1 kind: ServiceBinding metadata: name: account-service spec: service: apiVersion: example.com/v1alpha1 kind: AccountService name: prod-account-service workload: apiVersion: apps/v1 kind: Deployment name: spring-petclinic",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: account-service spec: services: - group: \"\" version: v1 kind: Secret name: hippo-pguser-hippo",
"apiVersion: servicebinding.io/v1beta1 kind: ServiceBinding metadata: name: account-service spec: service: apiVersion: v1 kind: Secret name: hippo-pguser-hippo",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-pguser-{.metadata.name},objectType=Secret'",
"apiVersion: v1 kind: Secret metadata: name: hippo-pguser-hippo data: password: \"<password>\" user: \"<username>\"",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-config,objectType=ConfigMap'",
"apiVersion: v1 kind: ConfigMap metadata: name: hippo-config data: db_timeout: \"10s\" user: \"hippo\"",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-detect-all namespace: my-petclinic spec: detectBindingResources: true services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo application: name: spring-petclinic group: apps version: v1 resource: deployments",
"service.binding(/<NAME>)?: \"<VALUE>|(path=<JSONPATH_TEMPLATE>(,objectType=<OBJECT_TYPE>)?(,elementType=<ELEMENT_TYPE>)?(,sourceKey=<SOURCE_KEY>)?(,sourceValue=<SOURCE_VALUE>)?)\"",
"apiVersion: apps.example.org/v1beta1 kind: Database metadata: name: my-db namespace: my-petclinic annotations: service.binding/username: path={.spec.name},optional=true",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: postgrescluster-reader labels: servicebinding.io/controller: \"true\" rules: - apiGroups: - postgres-operator.crunchydata.com resources: - postgresclusters verbs: - get - watch - list",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding/username: path={.metadata.name}",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: \"service.binding/type\": \"postgresql\" 1",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-pguser-{.metadata.name},objectType=Secret'",
"apiVersion: v1 kind: Secret metadata: name: hippo-pguser-hippo data: password: \"<password>\" user: \"<username>\"",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding: 'path={.metadata.name}-config,objectType=ConfigMap,sourceKey=user'",
"apiVersion: v1 kind: ConfigMap metadata: name: hippo-config data: db_timeout: \"10s\" user: \"hippo\"",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: service.binding/username: path={.metadata.name}",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: \"service.binding/uri\": \"path={.status.connections},elementType=sliceOfMaps,sourceKey=type,sourceValue=url\" spec: status: connections: - type: primary url: primary.example.com - type: secondary url: secondary.example.com - type: '404' url: black-hole.example.com",
"/bindings/<binding-name>/uri_primary => primary.example.com /bindings/<binding-name>/uri_secondary => secondary.example.com /bindings/<binding-name>/uri_404 => black-hole.example.com",
"status: connections: - type: primary url: primary.example.com - type: secondary url: secondary.example.com - type: '404' url: black-hole.example.com",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: \"service.binding/tags\": \"path={.spec.tags},elementType=sliceOfStrings\" spec: tags: - knowledge - is - power",
"/bindings/<binding-name>/tags_0 => knowledge /bindings/<binding-name>/tags_1 => is /bindings/<binding-name>/tags_2 => power",
"spec: tags: - knowledge - is - power",
"apiVersion: postgres-operator.crunchydata.com/v1beta1 kind: PostgresCluster metadata: name: hippo namespace: my-petclinic annotations: \"service.binding/url\": \"path={.spec.connections},elementType=sliceOfStrings,sourceValue=url\" spec: connections: - type: primary url: primary.example.com - type: secondary url: secondary.example.com - type: '404' url: black-hole.example.com",
"/bindings/<binding-name>/url_0 => primary.example.com /bindings/<binding-name>/url_1 => secondary.example.com /bindings/<binding-name>/url_2 => black-hole.example.com",
"USDSERVICE_BINDING_ROOT 1 βββ account-database 2 β βββ type 3 β βββ provider 4 β βββ uri β βββ username β βββ password βββ transaction-event-stream 5 βββ type βββ connection-count βββ uri βββ certificates βββ private-key",
"import os username = os.getenv(\"USERNAME\") password = os.getenv(\"PASSWORD\")",
"from pyservicebinding import binding try: sb = binding.ServiceBinding() except binding.ServiceBindingRootMissingError as msg: # log the error message and retry/exit print(\"SERVICE_BINDING_ROOT env var not set\") sb = binding.ServiceBinding() bindings_list = sb.bindings(\"postgresql\")",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster namespace: my-petclinic spec: services: 1 - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo application: 2 name: spring-petclinic group: apps version: v1 resource: deployments",
"host: hippo-pgbouncer port: 5432",
"DATABASE_HOST: hippo-pgbouncer DATABASE_PORT: 5432",
"application: name: spring-petclinic group: apps version: v1 resource: deployments",
"services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo",
"DATABASE_HOST: hippo-pgbouncer",
"POSTGRESQL_DATABASE_HOST_ENV: hippo-pgbouncer POSTGRESQL_DATABASE_PORT_ENV: 5432",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster namespace: my-petclinic spec: services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo 1 id: postgresDB 2 - group: \"\" version: v1 kind: Secret name: hippo-pguser-hippo id: postgresSecret application: name: spring-petclinic group: apps version: v1 resource: deployments mappings: ## From the database service - name: JDBC_URL value: 'jdbc:postgresql://{{ .postgresDB.metadata.annotations.proxy }}:{{ .postgresDB.spec.port }}/{{ .postgresDB.metadata.name }}' ## From both the services! - name: CREDENTIALS value: '{{ .postgresDB.metadata.name }}{{ translationService.postgresSecret.data.password }}' ## Generate JSON - name: DB_JSON 3 value: {{ json .postgresDB.status }} 4",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: multi-application-binding namespace: service-binding-demo spec: application: labelSelector: 1 matchLabels: environment: production group: apps version: v1 resource: deployments services: group: \"\" version: v1 kind: Secret name: super-secret-data",
"apiVersion: servicebindings.io/v1beta1 kind: ServiceBinding metadata: name: multi-application-binding namespace: service-binding-demo spec: workload: selector: 1 matchLabels: environment: production apiVersion: app/v1 kind: Deployment service: apiVersion: v1 kind: Secret name: super-secret-data",
"apiVersion: \"operator.sbo.com/v1\" kind: SecondaryWorkload metadata: name: secondary-workload spec: containers: - name: hello-world image: quay.io/baijum/secondary-workload:latest ports: - containerPort: 8080",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo id: postgresDB - group: \"\" version: v1 kind: Secret name: hippo-pguser-hippo id: postgresSecret application: 1 name: spring-petclinic group: apps version: v1 resource: deployments application: 2 name: secondary-workload group: operator.sbo.com version: v1 resource: secondaryworkloads bindingPath: containersPath: spec.containers 3",
"apiVersion: \"operator.sbo.com/v1\" kind: SecondaryWorkload metadata: name: secondary-workload spec: containers: - env: 1 - name: ServiceBindingOperatorChangeTriggerEnvVar value: \"31793\" envFrom: - secretRef: name: secret-resource-name 2 image: quay.io/baijum/secondary-workload:latest name: hello-world ports: - containerPort: 8080 resources: {}",
"apiVersion: \"operator.sbo.com/v1\" kind: SecondaryWorkload metadata: name: secondary-workload spec: secret: \"\"",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster spec: application: 1 name: secondary-workload group: operator.sbo.com version: v1 resource: secondaryworkloads bindingPath: secretPath: spec.secret 2",
"apiVersion: \"operator.sbo.com/v1\" kind: SecondaryWorkload metadata: name: secondary-workload spec: secret: binding-request-72ddc0c540ab3a290e138726940591debf14c581 1",
"apiVersion: servicebinding.io/v1beta1 kind: ClusterWorkloadResourceMapping metadata: name: cronjobs.batch 1 spec: versions: - version: \"v1\" 2 annotations: .spec.jobTemplate.spec.template.metadata.annotations 3 containers: - path: .spec.jobTemplate.spec.template.spec.containers[*] 4 - path: .spec.jobTemplate.spec.template.spec.initContainers[*] name: .name 5 env: .env 6 volumeMounts: .volumeMounts 7 volumes: .spec.jobTemplate.spec.template.spec.volumes 8",
"oc delete ServiceBinding <.metadata.name>",
"oc delete ServiceBinding spring-petclinic-pgcluster",
"apiVersion: binding.operators.coreos.com/v1alpha1 kind: ServiceBinding metadata: name: spring-petclinic-pgcluster namespace: my-petclinic spec: services: - group: postgres-operator.crunchydata.com version: v1beta1 kind: PostgresCluster name: hippo application: name: spring-petclinic group: apps version: v1 resource: deployments",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-amd64 -o /usr/local/bin/helm",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-s390x -o /usr/local/bin/helm",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-ppc64le -o /usr/local/bin/helm",
"chmod +x /usr/local/bin/helm",
"helm version",
"version.BuildInfo{Version:\"v3.0\", GitCommit:\"b31719aab7963acf4887a1c1e6d5e53378e34d93\", GitTreeState:\"clean\", GoVersion:\"go1.13.4\"}",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-darwin-amd64 -o /usr/local/bin/helm",
"chmod +x /usr/local/bin/helm",
"helm version",
"version.BuildInfo{Version:\"v3.0\", GitCommit:\"b31719aab7963acf4887a1c1e6d5e53378e34d93\", GitTreeState:\"clean\", GoVersion:\"go1.13.4\"}",
"oc new-project vault",
"helm repo add openshift-helm-charts https://charts.openshift.io/",
"\"openshift-helm-charts\" has been added to your repositories",
"helm repo update",
"helm install example-vault openshift-helm-charts/hashicorp-vault",
"NAME: example-vault LAST DEPLOYED: Fri Mar 11 12:02:12 2022 NAMESPACE: vault STATUS: deployed REVISION: 1 NOTES: Thank you for installing HashiCorp Vault!",
"helm list",
"NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION example-vault vault 1 2022-03-11 12:02:12.296226673 +0530 IST deployed vault-0.19.0 1.9.2",
"oc new-project nodejs-ex-k",
"git clone https://github.com/redhat-developer/redhat-helm-charts",
"cd redhat-helm-charts/alpha/nodejs-ex-k/",
"apiVersion: v2 1 name: nodejs-ex-k 2 description: A Helm chart for OpenShift 3 icon: https://static.redhat.com/libs/redhat/brand-assets/latest/corp/logo.svg 4 version: 0.2.1 5",
"helm lint",
"[INFO] Chart.yaml: icon is recommended 1 chart(s) linted, 0 chart(s) failed",
"cd ..",
"helm install nodejs-chart nodejs-ex-k",
"helm list",
"NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION nodejs-chart nodejs-ex-k 1 2019-12-05 15:06:51.379134163 -0500 EST deployed nodejs-0.1.0 1.16.0",
"apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <name> spec: # optional name that might be used by console # name: <chart-display-name> connectionConfig: url: <helm-chart-repository-url>",
"cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: azure-sample-repo spec: name: azure-sample-repo connectionConfig: url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs EOF",
"apiVersion: helm.openshift.io/v1beta1 kind: ProjectHelmChartRepository metadata: name: <name> spec: url: https://my.chart-repo.org/stable # optional name that might be used by console name: <chart-repo-display-name> # optional and only needed for UI purposes description: <My private chart repo> # required: chart repository URL connectionConfig: url: <helm-chart-repository-url>",
"cat <<EOF | oc apply --namespace my-namespace -f - apiVersion: helm.openshift.io/v1beta1 kind: ProjectHelmChartRepository metadata: name: azure-sample-repo spec: name: azure-sample-repo connectionConfig: url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs EOF",
"projecthelmchartrepository.helm.openshift.io/azure-sample-repo created",
"oc get projecthelmchartrepositories --namespace my-namespace",
"NAME AGE azure-sample-repo 1m",
"oc create configmap helm-ca-cert --from-file=ca-bundle.crt=/path/to/certs/ca.crt -n openshift-config",
"oc create secret tls helm-tls-configs --cert=/path/to/certs/client.crt --key=/path/to/certs/client.key -n openshift-config",
"cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <helm-repository> spec: name: <helm-repository> connectionConfig: url: <URL for the Helm repository> tlsConfig: name: helm-tls-configs ca: name: helm-ca-cert EOF",
"cat <<EOF | kubectl apply -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer rules: - apiGroups: [\"\"] resources: [\"configmaps\"] resourceNames: [\"helm-ca-cert\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"secrets\"] resourceNames: [\"helm-tls-configs\"] verbs: [\"get\"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: 'system:authenticated' roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: helm-chartrepos-tls-conf-viewer EOF",
"cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: azure-sample-repo spec: connectionConfig: url:https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs disabled: true EOF",
"spec: connectionConfig: url: <url-of-the-repositoru-to-be-disabled> disabled: true",
"apiVersion: apps/v1 kind: ReplicaSet metadata: name: frontend-1 labels: tier: frontend spec: replicas: 3 selector: 1 matchLabels: 2 tier: frontend matchExpressions: 3 - {key: tier, operator: In, values: [frontend]} template: metadata: labels: tier: frontend spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always",
"apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always",
"apiVersion: apps/v1 kind: Deployment metadata: name: hello-openshift spec: replicas: 1 selector: matchLabels: app: hello-openshift template: metadata: labels: app: hello-openshift spec: containers: - name: hello-openshift image: openshift/hello-openshift:latest ports: - containerPort: 80",
"apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: frontend spec: replicas: 5 selector: name: frontend template: { ... } triggers: - type: ConfigChange 1 - imageChangeParams: automatic: true containerNames: - helloworld from: kind: ImageStreamTag name: hello-openshift:latest type: ImageChange 2 strategy: type: Rolling 3",
"oc rollout pause deployments/<name>",
"oc rollout latest dc/<name>",
"oc rollout history dc/<name>",
"oc rollout history dc/<name> --revision=1",
"oc describe dc <name>",
"oc rollout retry dc/<name>",
"oc rollout undo dc/<name>",
"oc set triggers dc/<name> --auto",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: template: spec: containers: - name: <container_name> image: 'image' command: - '<command>' args: - '<argument_1>' - '<argument_2>' - '<argument_3>'",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: template: spec: containers: - name: example-spring-boot image: 'image' command: - java args: - '-jar' - /opt/app-root/springboots2idemo.jar",
"oc logs -f dc/<name>",
"oc logs --version=1 dc/<name>",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: triggers: - type: \"ConfigChange\"",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: triggers: - type: \"ImageChange\" imageChangeParams: automatic: true 1 from: kind: \"ImageStreamTag\" name: \"origin-ruby-sample:latest\" namespace: \"myproject\" containerNames: - \"helloworld\"",
"oc set triggers dc/<dc_name> --from-image=<project>/<image>:<tag> -c <container_name>",
"kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift spec: type: \"Recreate\" resources: limits: cpu: \"100m\" 1 memory: \"256Mi\" 2 ephemeral-storage: \"1Gi\" 3",
"kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift spec: type: \"Recreate\" resources: requests: 1 cpu: \"100m\" memory: \"256Mi\" ephemeral-storage: \"1Gi\"",
"oc scale dc frontend --replicas=3",
"apiVersion: v1 kind: Pod metadata: name: my-pod spec: nodeSelector: disktype: ssd",
"oc edit dc/<deployment_config>",
"apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: example-dc spec: securityContext: {} serviceAccount: <service_account> serviceAccountName: <service_account>",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: strategy: type: Rolling rollingParams: updatePeriodSeconds: 1 1 intervalSeconds: 1 2 timeoutSeconds: 120 3 maxSurge: \"20%\" 4 maxUnavailable: \"10%\" 5 pre: {} 6 post: {}",
"oc new-app quay.io/openshifttest/deployment-example:latest",
"oc expose svc/deployment-example",
"oc scale dc/deployment-example --replicas=3",
"oc tag deployment-example:v2 deployment-example:latest",
"oc describe dc deployment-example",
"kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift spec: strategy: type: Recreate recreateParams: 1 pre: {} 2 mid: {} post: {}",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: strategy: type: Custom customParams: image: organization/strategy command: [ \"command\", \"arg1\" ] environment: - name: ENV_1 value: VALUE_1",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: strategy: type: Rolling customParams: command: - /bin/sh - -c - | set -e openshift-deploy --until=50% echo Halfway there openshift-deploy echo Complete",
"Started deployment #2 --> Scaling up custom-deployment-2 from 0 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-2 up to 1 --> Reached 50% (currently 50%) Halfway there --> Scaling up custom-deployment-2 from 1 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-1 down to 1 Scaling custom-deployment-2 up to 2 Scaling custom-deployment-1 down to 0 --> Success Complete",
"pre: failurePolicy: Abort execNewPod: {} 1",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: template: metadata: labels: name: frontend spec: containers: - name: helloworld image: openshift/origin-ruby-sample replicas: 5 selector: name: frontend strategy: type: Rolling rollingParams: pre: failurePolicy: Abort execNewPod: containerName: helloworld 1 command: [ \"/usr/bin/command\", \"arg1\", \"arg2\" ] 2 env: 3 - name: CUSTOM_VAR1 value: custom_value1 volumes: - data 4",
"oc set deployment-hook dc/frontend --pre -c helloworld -e CUSTOM_VAR1=custom_value1 --volumes data --failure-policy=abort -- /usr/bin/command arg1 arg2",
"oc new-app openshift/deployment-example:v1 --name=example-blue",
"oc new-app openshift/deployment-example:v2 --name=example-green",
"oc expose svc/example-blue --name=bluegreen-example",
"oc patch route/bluegreen-example -p '{\"spec\":{\"to\":{\"name\":\"example-green\"}}}'",
"oc new-app openshift/deployment-example --name=ab-example-a",
"oc new-app openshift/deployment-example:v2 --name=ab-example-b",
"oc expose svc/ab-example-a",
"oc edit route <route_name>",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-alternate-service annotations: haproxy.router.openshift.io/balance: roundrobin spec: host: ab-example.my-project.my-domain to: kind: Service name: ab-example-a weight: 10 alternateBackends: - kind: Service name: ab-example-b weight: 15",
"oc set route-backends ROUTENAME [--zero|--equal] [--adjust] SERVICE=WEIGHT[%] [...] [options]",
"oc set route-backends ab-example ab-example-a=198 ab-example-b=2",
"oc set route-backends ab-example",
"NAME KIND TO WEIGHT routes/ab-example Service ab-example-a 198 (99%) routes/ab-example Service ab-example-b 2 (1%)",
"oc set route-backends ab-example --adjust ab-example-a=200 ab-example-b=10",
"oc set route-backends ab-example --adjust ab-example-b=5%",
"oc set route-backends ab-example --adjust ab-example-b=+15%",
"oc set route-backends ab-example --equal",
"oc new-app openshift/deployment-example --name=ab-example-a --as-deployment-config=true --labels=ab-example=true --env=SUBTITLE\\=shardA",
"oc delete svc/ab-example-a",
"oc expose deployment ab-example-a --name=ab-example --selector=ab-example\\=true",
"oc expose service ab-example",
"oc new-app openshift/deployment-example:v2 --name=ab-example-b --labels=ab-example=true SUBTITLE=\"shard B\" COLOR=\"red\" --as-deployment-config=true",
"oc delete svc/ab-example-b",
"oc scale dc/ab-example-a --replicas=0",
"oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0",
"oc edit dc/ab-example-a",
"oc edit dc/ab-example-b",
"apiVersion: v1 kind: ResourceQuota metadata: name: core-object-counts spec: hard: configmaps: \"10\" 1 persistentvolumeclaims: \"4\" 2 replicationcontrollers: \"20\" 3 secrets: \"10\" 4 services: \"10\" 5 services.loadbalancers: \"2\" 6",
"apiVersion: v1 kind: ResourceQuota metadata: name: openshift-object-counts spec: hard: openshift.io/imagestreams: \"10\" 1",
"apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources spec: hard: pods: \"4\" 1 requests.cpu: \"1\" 2 requests.memory: 1Gi 3 limits.cpu: \"2\" 4 limits.memory: 2Gi 5",
"apiVersion: v1 kind: ResourceQuota metadata: name: besteffort spec: hard: pods: \"1\" 1 scopes: - BestEffort 2",
"apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-long-running spec: hard: pods: \"4\" 1 limits.cpu: \"4\" 2 limits.memory: \"2Gi\" 3 scopes: - NotTerminating 4",
"apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-time-bound spec: hard: pods: \"2\" 1 limits.cpu: \"1\" 2 limits.memory: \"1Gi\" 3 scopes: - Terminating 4",
"apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption spec: hard: persistentvolumeclaims: \"10\" 1 requests.storage: \"50Gi\" 2 gold.storageclass.storage.k8s.io/requests.storage: \"10Gi\" 3 silver.storageclass.storage.k8s.io/requests.storage: \"20Gi\" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: \"5\" 5 bronze.storageclass.storage.k8s.io/requests.storage: \"0\" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: \"0\" 7 requests.ephemeral-storage: 2Gi 8 limits.ephemeral-storage: 4Gi 9",
"oc create -f <file> [-n <project_name>]",
"oc create -f core-object-counts.yaml -n demoproject",
"oc create quota <name> --hard=count/<resource>.<group>=<quota>,count/<resource>.<group>=<quota> 1",
"oc create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4",
"resourcequota \"test\" created",
"oc describe quota test",
"Name: test Namespace: quota Resource Used Hard -------- ---- ---- count/deployments.extensions 0 2 count/pods 0 3 count/replicasets.extensions 0 4 count/secrets 0 4",
"oc describe node ip-172-31-27-209.us-west-2.compute.internal | egrep 'Capacity|Allocatable|gpu'",
"openshift.com/gpu-accelerator=true Capacity: nvidia.com/gpu: 2 Allocatable: nvidia.com/gpu: 2 nvidia.com/gpu 0 0",
"apiVersion: v1 kind: ResourceQuota metadata: name: gpu-quota namespace: nvidia spec: hard: requests.nvidia.com/gpu: 1",
"oc create -f gpu-quota.yaml",
"resourcequota/gpu-quota created",
"oc describe quota gpu-quota -n nvidia",
"Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 0 1",
"apiVersion: v1 kind: Pod metadata: generateName: gpu-pod- namespace: nvidia spec: restartPolicy: OnFailure containers: - name: rhel7-gpu-pod image: rhel7 env: - name: NVIDIA_VISIBLE_DEVICES value: all - name: NVIDIA_DRIVER_CAPABILITIES value: \"compute,utility\" - name: NVIDIA_REQUIRE_CUDA value: \"cuda>=5.0\" command: [\"sleep\"] args: [\"infinity\"] resources: limits: nvidia.com/gpu: 1",
"oc create -f gpu-pod.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE gpu-pod-s46h7 1/1 Running 0 1m",
"oc describe quota gpu-quota -n nvidia",
"Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 1 1",
"oc create -f gpu-pod.yaml",
"Error from server (Forbidden): error when creating \"gpu-pod.yaml\": pods \"gpu-pod-f7z2w\" is forbidden: exceeded quota: gpu-quota, requested: requests.nvidia.com/gpu=1, used: requests.nvidia.com/gpu=1, limited: requests.nvidia.com/gpu=1",
"oc get quota -n demoproject",
"NAME AGE REQUEST LIMIT besteffort 4s pods: 1/2 compute-resources-time-bound 10m pods: 0/2 limits.cpu: 0/1, limits.memory: 0/1Gi core-object-counts 109s configmaps: 2/10, persistentvolumeclaims: 1/4, replicationcontrollers: 1/20, secrets: 9/10, services: 2/10",
"oc describe quota core-object-counts -n demoproject",
"Name: core-object-counts Namespace: demoproject Resource Used Hard -------- ---- ---- configmaps 3 10 persistentvolumeclaims 0 4 replicationcontrollers 3 20 secrets 9 10 services 2 10",
"oc adm create-bootstrap-project-template -o yaml > template.yaml",
"- apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption namespace: USD{PROJECT_NAME} spec: hard: persistentvolumeclaims: \"10\" 1 requests.storage: \"50Gi\" 2 gold.storageclass.storage.k8s.io/requests.storage: \"10Gi\" 3 silver.storageclass.storage.k8s.io/requests.storage: \"20Gi\" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: \"5\" 5 bronze.storageclass.storage.k8s.io/requests.storage: \"0\" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: \"0\" 7",
"oc create -f template.yaml -n openshift-config",
"oc get templates -n openshift-config",
"oc edit template <project_request_template> -n openshift-config",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: project-request",
"oc new-project <project_name>",
"oc get resourcequotas",
"oc describe resourcequotas <resource_quota_name>",
"oc create clusterquota for-user --project-annotation-selector openshift.io/requester=<user_name> --hard pods=10 --hard secrets=20",
"apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: name: for-user spec: quota: 1 hard: pods: \"10\" secrets: \"20\" selector: annotations: 2 openshift.io/requester: <user_name> labels: null 3 status: namespaces: 4 - namespace: ns-one status: hard: pods: \"10\" secrets: \"20\" used: pods: \"1\" secrets: \"9\" total: 5 hard: pods: \"10\" secrets: \"20\" used: pods: \"1\" secrets: \"9\"",
"oc create clusterresourcequota for-name \\ 1 --project-label-selector=name=frontend \\ 2 --hard=pods=10 --hard=secrets=20",
"apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: creationTimestamp: null name: for-name spec: quota: hard: pods: \"10\" secrets: \"20\" selector: annotations: null labels: matchLabels: name: frontend",
"oc describe AppliedClusterResourceQuota",
"Name: for-user Namespace: <none> Created: 19 hours ago Labels: <none> Annotations: <none> Label Selector: <null> AnnotationSelector: map[openshift.io/requester:<user-name>] Resource Used Hard -------- ---- ---- pods 1 10 secrets 9 20",
"kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: my-namespace data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4",
"apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 restartPolicy: Never",
"SPECIAL_LEVEL_KEY=very log_level=INFO",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)\" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type restartPolicy: Never",
"very charm",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/special.how\" ] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never",
"very",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/path/to/special-key\" ] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never",
"very",
"apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 readinessProbe: 3 exec: 4 command: 5 - cat - /tmp/healthy",
"apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 httpGet: 4 scheme: HTTPS 5 path: /healthz port: 8080 6 httpHeaders: - name: X-Custom-Header value: Awesome startupProbe: 7 httpGet: 8 path: /healthz port: 8080 9 failureThreshold: 30 10 periodSeconds: 10 11",
"apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 exec: 4 command: 5 - /bin/bash - '-c' - timeout 60 /opt/eap/bin/livenessProbe.sh periodSeconds: 10 6 successThreshold: 1 7 failureThreshold: 3 8",
"kind: Deployment apiVersion: apps/v1 metadata: labels: test: health-check name: my-application spec: template: spec: containers: - resources: {} readinessProbe: 1 tcpSocket: port: 8080 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3 terminationMessagePath: /dev/termination-log name: ruby-ex livenessProbe: 2 tcpSocket: port: 8080 initialDelaySeconds: 15 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3",
"apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: my-container 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 tcpSocket: 4 port: 8080 5 initialDelaySeconds: 15 6 periodSeconds: 20 7 timeoutSeconds: 10 8 readinessProbe: 9 httpGet: 10 host: my-host 11 scheme: HTTPS 12 path: /healthz port: 8080 13 startupProbe: 14 exec: 15 command: 16 - cat - /tmp/healthy failureThreshold: 30 17 periodSeconds: 20 18 timeoutSeconds: 10 19",
"oc create -f <file-name>.yaml",
"oc describe pod my-application",
"Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 9s default-scheduler Successfully assigned openshift-logging/liveness-exec to ip-10-0-143-40.ec2.internal Normal Pulling 2s kubelet, ip-10-0-143-40.ec2.internal pulling image \"registry.k8s.io/liveness\" Normal Pulled 1s kubelet, ip-10-0-143-40.ec2.internal Successfully pulled image \"registry.k8s.io/liveness\" Normal Created 1s kubelet, ip-10-0-143-40.ec2.internal Created container Normal Started 1s kubelet, ip-10-0-143-40.ec2.internal Started container",
"oc describe pod pod1",
". Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled <unknown> Successfully assigned aaa/liveness-http to ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Normal AddedInterface 47s multus Add eth0 [10.129.2.11/23] Normal Pulled 46s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"registry.k8s.io/liveness\" in 773.406244ms Normal Pulled 28s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"registry.k8s.io/liveness\" in 233.328564ms Normal Created 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Created container liveness Normal Started 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Started container liveness Warning Unhealthy 10s (x6 over 34s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Liveness probe failed: HTTP probe failed with statuscode: 500 Normal Killing 10s (x2 over 28s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Container liveness failed liveness probe, will be restarted Normal Pulling 10s (x3 over 47s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Pulling image \"registry.k8s.io/liveness\" Normal Pulled 10s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"registry.k8s.io/liveness\" in 244.116568ms",
"oc adm prune <object_type> <options>",
"oc adm prune groups --sync-config=path/to/sync/config [<options>]",
"oc adm prune groups --sync-config=ldap-sync-config.yaml",
"oc adm prune groups --sync-config=ldap-sync-config.yaml --confirm",
"oc adm prune deployments [<options>]",
"oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m",
"oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m --confirm",
"oc adm prune builds [<options>]",
"oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m",
"oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m --confirm",
"spec: schedule: 0 0 * * * 1 suspend: false 2 keepTagRevisions: 3 3 keepYoungerThanDuration: 60m 4 keepYoungerThan: 3600000000000 5 resources: {} 6 affinity: {} 7 nodeSelector: {} 8 tolerations: [] 9 successfulJobsHistoryLimit: 3 10 failedJobsHistoryLimit: 3 11 status: observedGeneration: 2 12 conditions: 13 - type: Available status: \"True\" lastTransitionTime: 2019-10-09T03:13:45 reason: Ready message: \"Periodic image pruner has been created.\" - type: Scheduled status: \"True\" lastTransitionTime: 2019-10-09T03:13:45 reason: Scheduled message: \"Image pruner job has been scheduled.\" - type: Failed staus: \"False\" lastTransitionTime: 2019-10-09T03:13:45 reason: Succeeded message: \"Most recent image pruning job succeeded.\"",
"oc create -f <filename>.yaml",
"kind: List apiVersion: v1 items: - apiVersion: v1 kind: ServiceAccount metadata: name: pruner namespace: openshift-image-registry - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: openshift-image-registry-pruner roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:image-pruner subjects: - kind: ServiceAccount name: pruner namespace: openshift-image-registry - apiVersion: batch/v1 kind: CronJob metadata: name: image-pruner namespace: openshift-image-registry spec: schedule: \"0 0 * * *\" concurrencyPolicy: Forbid successfulJobsHistoryLimit: 1 failedJobsHistoryLimit: 3 jobTemplate: spec: template: spec: restartPolicy: OnFailure containers: - image: \"quay.io/openshift/origin-cli:4.1\" resources: requests: cpu: 1 memory: 1Gi terminationMessagePolicy: FallbackToLogsOnError command: - oc args: - adm - prune - images - --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt - --keep-tag-revisions=5 - --keep-younger-than=96h - --confirm=true name: image-pruner serviceAccountName: pruner",
"oc adm prune images [<options>]",
"oc rollout restart deployment/image-registry -n openshift-image-registry",
"oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m",
"oc adm prune images --prune-over-size-limit",
"oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm",
"oc adm prune images --prune-over-size-limit --confirm",
"oc get is -n <namespace> -o go-template='{{range USDisi, USDis := .items}}{{range USDti, USDtag := USDis.status.tags}}' '{{range USDii, USDitem := USDtag.items}}{{if eq USDitem.image \"sha256:<hash>\"}}{{USDis.metadata.name}}:{{USDtag.tag}} at position {{USDii}} out of {{len USDtag.items}}\\n' '{{end}}{{end}}{{end}}{{end}}'",
"myapp:v2 at position 4 out of 5 myapp:v2.1 at position 2 out of 2 myapp:v2.1-may-2016 at position 0 out of 1",
"error: error communicating with registry: Get https://172.30.30.30:5000/healthz: http: server gave HTTP response to HTTPS client",
"error: error communicating with registry: Get http://172.30.30.30:5000/healthz: malformed HTTP response \"\\x15\\x03\\x01\\x00\\x02\\x02\" error: error communicating with registry: [Get https://172.30.30.30:5000/healthz: x509: certificate signed by unknown authority, Get http://172.30.30.30:5000/healthz: malformed HTTP response \"\\x15\\x03\\x01\\x00\\x02\\x02\"]",
"error: error communicating with registry: Get https://172.30.30.30:5000/: x509: certificate signed by unknown authority",
"oc patch configs.imageregistry.operator.openshift.io/cluster -p '{\"spec\":{\"readOnly\":true}}' --type=merge",
"service_account=USD(oc get -n openshift-image-registry -o jsonpath='{.spec.template.spec.serviceAccountName}' deploy/image-registry)",
"oc adm policy add-cluster-role-to-user system:image-pruner -z USD{service_account} -n openshift-image-registry",
"oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c '/usr/bin/dockerregistry -prune=check'",
"oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c 'REGISTRY_LOG_LEVEL=info /usr/bin/dockerregistry -prune=check'",
"time=\"2017-06-22T11:50:25.066156047Z\" level=info msg=\"start prune (dry-run mode)\" distribution_version=\"v2.4.1+unknown\" kubernetes_version=v1.6.1+USDFormat:%hUSD openshift_version=unknown time=\"2017-06-22T11:50:25.092257421Z\" level=info msg=\"Would delete blob: sha256:00043a2a5e384f6b59ab17e2c3d3a3d0a7de01b2cabeb606243e468acc663fa5\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:25.092395621Z\" level=info msg=\"Would delete blob: sha256:0022d49612807cb348cabc562c072ef34d756adfe0100a61952cbcb87ee6578a\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:25.092492183Z\" level=info msg=\"Would delete blob: sha256:0029dd4228961086707e53b881e25eba0564fa80033fbbb2e27847a28d16a37c\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.673946639Z\" level=info msg=\"Would delete blob: sha256:ff7664dfc213d6cc60fd5c5f5bb00a7bf4a687e18e1df12d349a1d07b2cf7663\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.674024531Z\" level=info msg=\"Would delete blob: sha256:ff7a933178ccd931f4b5f40f9f19a65be5eeeec207e4fad2a5bafd28afbef57e\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.674675469Z\" level=info msg=\"Would delete blob: sha256:ff9b8956794b426cc80bb49a604a0b24a1553aae96b930c6919a6675db3d5e06\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 Would delete 13374 blobs Would free up 2.835 GiB of disk space Use -prune=delete to actually delete the data",
"oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c '/usr/bin/dockerregistry -prune=delete'",
"Deleted 13374 blobs Freed up 2.835 GiB of disk space",
"oc patch configs.imageregistry.operator.openshift.io/cluster -p '{\"spec\":{\"readOnly\":false}}' --type=merge",
"oc idle <service>",
"oc idle --resource-names-file <filename>",
"oc scale --replicas=1 dc <dc_name>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/building_applications/index |
Chapter 6. Messaging Channels | Chapter 6. Messaging Channels Abstract Messaging channels provide the plumbing for a messaging application. This chapter describes the different kinds of messaging channels available in a messaging system, and the roles that they play. 6.1. Point-to-Point Channel Overview A point-to-point channel , shown in Figure 6.1, "Point to Point Channel Pattern" is a message channel that guarantees that only one receiver consumes any given message. This is in contrast to a publish-subscribe channel , which allows multiple receivers to consume the same message. In particular, with a publish-subscribe channel, it is possible for multiple receivers to subscribe to the same channel. If more than one receiver competes to consume a message, it is up to the message channel to ensure that only one receiver actually consumes the message. Figure 6.1. Point to Point Channel Pattern Components that support point-to-point channel The following Apache Camel components support the point-to-point channel pattern: JMS ActiveMQ SEDA JPA XMPP JMS In JMS, a point-to-point channel is represented by a queue . For example, you can specify the endpoint URI for a JMS queue called Foo.Bar as follows: The qualifier, queue: , is optional, because the JMS component creates a queue endpoint by default. Therefore, you can also specify the following equivalent endpoint URI: See Jms in the Apache Camel Component Reference Guide for more details. ActiveMQ In ActiveMQ, a point-to-point channel is represented by a queue. For example, you can specify the endpoint URI for an ActiveMQ queue called Foo.Bar as follows: See ActiveMQ in the Apache Camel Component Reference Guide for more details. SEDA The Apache Camel Staged Event-Driven Architecture (SEDA) component is implemented using a blocking queue. Use the SEDA component if you want to create a lightweight point-to-point channel that is internal to the Apache Camel application. For example, you can specify an endpoint URI for a SEDA queue called SedaQueue as follows: JPA The Java Persistence API (JPA) component is an EJB 3 persistence standard that is used to write entity beans out to a database. See JPA in the Apache Camel Component Reference Guide for more details. XMPP The XMPP (Jabber) component supports the point-to-point channel pattern when it is used in the person-to-person mode of communication. See XMPP in the Apache Camel Component Reference Guide for more details. 6.2. Publish-Subscribe Channel Overview A publish-subscribe channel , shown in Figure 6.2, "Publish Subscribe Channel Pattern" , is a Section 5.2, "Message Channel" that enables multiple subscribers to consume any given message. This is in contrast with a Section 6.1, "Point-to-Point Channel" . Publish-subscribe channels are frequently used as a means of broadcasting events or notifications to multiple subscribers. Figure 6.2. Publish Subscribe Channel Pattern Components that support publish-subscribe channel The following Apache Camel components support the publish-subscribe channel pattern: JMS ActiveMQ XMPP SEDA for working with SEDA in the same CamelContext which can work in pub-sub, but allowing multiple consumers. see VM in the Apache Camel Component Reference Guide as SEDA, but for use within the same JVM. JMS In JMS, a publish-subscribe channel is represented by a topic . For example, you can specify the endpoint URI for a JMS topic called StockQuotes as follows: See Jms in the Apache Camel Component Reference Guide for more details. 
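The endpoint URIs referenced in the point-to-point and publish-subscribe discussions above did not survive extraction. The following Java DSL sketch restates them; the queue and topic names come from the text, while the route wiring and log: targets are illustrative only, and it assumes the jms, activemq, and seda components are available in the Camel context.

import org.apache.camel.builder.RouteBuilder;

public class ChannelEndpointRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // Point-to-point: JMS queue Foo.Bar (the queue: qualifier is optional)
        from("jms:queue:Foo.Bar").to("log:jms.queue");
        // Equivalent endpoint URI without the qualifier:
        // from("jms:Foo.Bar").to("log:jms.queue");

        // Point-to-point: ActiveMQ queue Foo.Bar
        from("activemq:queue:Foo.Bar").to("log:activemq.queue");

        // Point-to-point: lightweight in-JVM SEDA queue SedaQueue
        from("seda:SedaQueue").to("log:seda.queue");

        // Publish-subscribe: JMS topic StockQuotes
        from("jms:topic:StockQuotes").to("log:stock.quotes");
    }
}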
ActiveMQ In ActiveMQ, a publish-subscribe channel is represented by a topic. For example, you can specify the endpoint URI for an ActiveMQ topic called StockQuotes , as follows: See ActiveMQ in the Apache Camel Component Reference Guide for more details. XMPP The XMPP (Jabber) component supports the publish-subscribe channel pattern when it is used in the group communication mode. See Xmpp in the Apache Camel Component Reference Guide for more details. Static subscription lists If you prefer, you can also implement publish-subscribe logic within the Apache Camel application itself. A simple approach is to define a static subscription list , where the target endpoints are all explicitly listed at the end of the route. However, this approach is not as flexible as a JMS or ActiveMQ topic. Java DSL example The following Java DSL example shows how to simulate a publish-subscribe channel with a single publisher, seda:a , and three subscribers, seda:b , seda:c , and seda:d : Note This only works for the InOnly message exchange pattern. XML configuration example The following example shows how to configure the same route in XML: 6.3. Dead Letter Channel Overview The dead letter channel pattern, shown in Figure 6.3, "Dead Letter Channel Pattern" , describes the actions to take when the messaging system fails to deliver a message to the intended recipient. This includes such features as retrying delivery and, if delivery ultimately fails, sending the message to a dead letter channel, which archives the undelivered messages. Figure 6.3. Dead Letter Channel Pattern Creating a dead letter channel in Java DSL The following example shows how to create a dead letter channel using Java DSL: Where the errorHandler() method is a Java DSL interceptor, which implies that all of the routes defined in the current route builder are affected by this setting. The deadLetterChannel() method is a Java DSL command that creates a new dead letter channel with the specified destination endpoint, seda:errors . The errorHandler() interceptor provides a catch-all mechanism for handling all error types. If you want to apply a more fine-grained approach to exception handling, you can use the onException clauses instead(see the section called "onException clause" ). XML DSL example You can define a dead letter channel in the XML DSL, as follows: Redelivery policy Normally, you do not send a message straight to the dead letter channel, if a delivery attempt fails. Instead, you re-attempt delivery up to some maximum limit, and after all redelivery attempts fail you would send the message to the dead letter channel. To customize message redelivery, you can configure the dead letter channel to have a redelivery policy . For example, to specify a maximum of two redelivery attempts, and to apply an exponential backoff algorithm to the time delay between delivery attempts, you can configure the dead letter channel as follows: Where you set the redelivery options on the dead letter channel by invoking the relevant methods in a chain (each method in the chain returns a reference to the current RedeliveryPolicy object). Table 6.1, "Redelivery Policy Settings" summarizes the methods that you can use to set redelivery policies. Table 6.1. Redelivery Policy Settings Method Signature Default Description allowRedeliveryWhileStopping() true Controls whether redelivery is attempted during graceful shutdown or while a route is stopping. A delivery that is already in progress when stopping is initiated will not be interrupted. 
backOffMultiplier(double multiplier) 2 If exponential backoff is enabled, let m be the backoff multiplier and let d be the initial delay. The sequence of redelivery attempts are then timed as follows: collisionAvoidancePercent(double collisionAvoidancePercent) 15 If collision avoidance is enabled, let p be the collision avoidance percent. The collision avoidance policy then tweaks the delay by a random amount, up to plus/minus p% of its current value. deadLetterHandleNewException true Camel 2.15: Specifies whether or not to handle an exception that occurs while processing a message in the dead letter channel. If true , the exception is handled and a logged at the WARN level (so that the dead letter channel is guaranteed to complete). If false , the exception is not handled, so the dead letter channel fails, and propagates the new exception. delayPattern(String delayPattern) None Apache Camel 2.0: See the section called "Redeliver delay pattern" . disableRedelivery() true Apache Camel 2.0: Disables the redelivery feature. To enable redelivery, set maximumRedeliveries() to a positive integer value. handled(boolean handled) true Apache Camel 2.0: If true , the current exception is cleared when the message is moved to the dead letter channel; if false , the exception is propagated back to the client. initialRedeliveryDelay(long initialRedeliveryDelay) 1000 Specifies the delay (in milliseconds) before attempting the first redelivery. logNewException true Specifies whether to log at WARN level, when an exception is raised in the dead letter channel. logStackTrace(boolean logStackTrace) false Apache Camel 2.0: If true , the JVM stack trace is included in the error logs. maximumRedeliveries(int maximumRedeliveries) 0 Apache Camel 2.0: Maximum number of delivery attempts. maximumRedeliveryDelay(long maxDelay) 60000 Apache Camel 2.0: When using an exponential backoff strategy (see useExponentialBackOff() ), it is theoretically possible for the redelivery delay to increase without limit. This property imposes an upper limit on the redelivery delay (in milliseconds) onRedelivery(Processor processor) None Apache Camel 2.0: Configures a processor that gets called before every redelivery attempt. redeliveryDelay(long int) 0 Apache Camel 2.0: Specifies the delay (in milliseconds) between redelivery attempts. Apache Camel 2.16.0 : The default redelivery delay is one second. retriesExhaustedLogLevel(LoggingLevel logLevel) LoggingLevel.ERROR Apache Camel 2.0: Specifies the logging level at which to log delivery failure (specified as an org.apache.camel.LoggingLevel constant). retryAttemptedLogLevel(LoggingLevel logLevel) LoggingLevel.DEBUG Apache Camel 2.0: Specifies the logging level at which to redelivery attempts (specified as an org.apache.camel.LoggingLevel constant). useCollisionAvoidance() false Enables collision avoidence, which adds some randomization to the backoff timings to reduce contention probability. useOriginalMessage() false Apache Camel 2.0: If this feature is enabled, the message sent to the dead letter channel is a copy of the original message exchange, as it existed at the beginning of the route (in the from() node). useExponentialBackOff() false Enables exponential backoff. Redelivery headers If Apache Camel attempts to redeliver a message, it automatically sets the headers described in Table 6.2, "Dead Letter Redelivery Headers" on the In message. Table 6.2. 
Dead Letter Redelivery Headers Header Name Type Description CamelRedeliveryCounter Integer Apache Camel 2.0: Counts the number of unsuccessful delivery attempts. This value is also set in Exchange.REDELIVERY_COUNTER . CamelRedelivered Boolean Apache Camel 2.0: True, if one or more redelivery attempts have been made. This value is also set in Exchange.REDELIVERED . CamelRedeliveryMaxCounter Integer Apache Camel 2.6: Holds the maximum redelivery setting (also set in the Exchange.REDELIVERY_MAX_COUNTER exchange property). This header is absent if you use retryWhile or have unlimited maximum redelivery configured. Redelivery exchange properties If Apache Camel attempts to redeliver a message, it automatically sets the exchange properties described in Table 6.3, "Redelivery Exchange Properties" . Table 6.3. Redelivery Exchange Properties Exchange Property Name Type Description Exchange.FAILURE_ROUTE_ID String Provides the route ID of the route that failed. The literal name of this property is CamelFailureRouteId . Using the original message Available as of Apache Camel 2.0 Because an exchange object is subject to modification as it passes through the route, the exchange that is current when an exception is raised is not necessarily the copy that you would want to store in the dead letter channel. In many cases, it is preferable to log the message that arrived at the start of the route, before it was subject to any kind of transformation by the route. For example, consider the following route: The preceding route listen for incoming JMS messages and then processes the messages using the sequence of beans: validateOrder , transformOrder , and handleOrder . But when an error occurs, we do not know in which state the message is in. Did the error happen before the transformOrder bean or after? We can ensure that the original message from jms:queue:order:input is logged to the dead letter channel by enabling the useOriginalMessage option as follows: Redeliver delay pattern Available as of Apache Camel 2.0 The delayPattern option is used to specify delays for particular ranges of the redelivery count. The delay pattern has the following syntax: limit1 : delay1 ; limit2 : delay2 ; limit3 : delay3 ;... , where each delayN is applied to redeliveries in the range limitN β redeliveryCount < limitN+1 For example, consider the pattern, 5:1000;10:5000;20:20000 , which defines three groups and results in the following redelivery delays: Attempt number 1..4 = 0 milliseconds (as the first group starts with 5). Attempt number 5..9 = 1000 milliseconds (the first group). Attempt number 10..19 = 5000 milliseconds (the second group). Attempt number 20.. = 20000 milliseconds (the last group). You can start a group with limit 1 to define a starting delay. For example, 1:1000;5:5000 results in the following redelivery delays: Attempt number 1..4 = 1000 millis (the first group) Attempt number 5.. = 5000 millis (the last group) There is no requirement that the delay should be higher than the and you can use any delay value you like. For example, the delay pattern, 1:5000;3:1000 , starts with a 5 second delay and then reduces the delay to 1 second. Which endpoint failed? When Apache Camel routes messages, it updates an Exchange property that contains the last endpoint the Exchange was sent to. Hence, you can obtain the URI for the current exchange's most recent destination using the following code: Where Exchange.TO_ENDPOINT is a string constant equal to CamelToEndpoint . 
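Several Java DSL fragments referenced in the publish-subscribe and dead letter channel sections above were lost in extraction: the simulated publish-subscribe channel, the dead letter channel with a chained redelivery policy, the useOriginalMessage and delayPattern options, and the code that reads the Exchange.TO_ENDPOINT and Exchange.FAILURE_ENDPOINT properties. The sketch below reconstructs their general shape; endpoint and bean names follow the text, the option values are illustrative, and it assumes the named beans and the jms component are registered.

import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;

public class DeadLetterChannelRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // Dead letter channel with a chained redelivery policy: keep the
        // original in-message, retry twice, back off exponentially.
        errorHandler(deadLetterChannel("seda:errors")
            .useOriginalMessage()
            .maximumRedeliveries(2)
            .useExponentialBackOff());
            // Alternatively, grouped delays: .delayPattern("5:1000;10:5000")

        // Simulated publish-subscribe channel (InOnly only): one publisher,
        // three statically listed subscribers.
        from("seda:a").to("seda:b", "seda:c", "seda:d");

        // Route whose original JMS message is preserved by useOriginalMessage()
        from("jms:queue:order:input")
            .to("bean:validateOrder")
            .to("bean:transformOrder")
            .to("bean:handleOrder");

        // Inspecting the last destination and the failure endpoint of an exchange
        from("seda:errors").process(exchange -> {
            // Exchange.TO_ENDPOINT == "CamelToEndpoint",
            // Exchange.FAILURE_ENDPOINT == "CamelFailureEndpoint"
            exchange.getIn().setHeader("lastDestination",
                exchange.getProperty(Exchange.TO_ENDPOINT, String.class));
            exchange.getIn().setHeader("failedAt",
                exchange.getProperty(Exchange.FAILURE_ENDPOINT, String.class));
        }).to("log:errors");
    }
}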
This property is updated whenever Camel sends a message to any endpoint. If an error occurs during routing and the exchange is moved into the dead letter queue, Apache Camel will additionally set a property named CamelFailureEndpoint , which identifies the last destination the exchange was sent to before the error occured. Hence, you can access the failure endpoint from within a dead letter queue using the following code: Where Exchange.FAILURE_ENDPOINT is a string constant equal to CamelFailureEndpoint . Note These properties remain set in the current exchange, even if the failure occurs after the given destination endpoint has finished processing. For example, consider the following route: Now suppose that a failure happens in the foo bean. In this case the Exchange.TO_ENDPOINT property and the Exchange.FAILURE_ENDPOINT property still contain the value. onRedelivery processor When a dead letter channel is performing redeliveries, it is possible to configure a Processor that is executed just before every redelivery attempt. This can be used for situations where you need to alter the message before it is redelivered. For example, the following dead letter channel is configured to call the MyRedeliverProcessor before redelivering exchanges: Where the MyRedeliveryProcessor process is implemented as follows: Control redelivery during shutdown or stopping If you stop a route or initiate graceful shutdown, the default behavior of the error handler is to continue attempting redelivery. Because this is typically not the desired behavior, you have the option of disabling redelivery during shutdown or stopping, by setting the allowRedeliveryWhileStopping option to false , as shown in the following example: Note The allowRedeliveryWhileStopping option is true by default, for backwards compatibility reasons. During aggressive shutdown, however, redelivery is always suppressed, irrespective of this option setting (for example, after graceful shutdown has timed out). Using onExceptionOccurred Processor Dead Letter channel supports the onExceptionOccurred processor to allow the custom processing of a message, after an exception occurs. You can use it for custom logging too. Any new exceptions thrown from the onExceptionOccurred processor is logged as WARN and ignored, not to override the existing exception. The difference between the onRedelivery processor and onExceptionOccurred processor is you can process the former exactly before the redelivery attempt. However, it does not happen immediately after an exception occurs. For example, If you configure the error handler to do five seconds delay between the redelivery attempts, then the redelivery processor is invoked five seconds later, after an exception occurs. The following example explains how to do the custom logging when an exception occurs. You need to configure the onExceptionOccurred to use the custom processor. onException clause Instead of using the errorHandler() interceptor in your route builder, you can define a series of onException() clauses that define different redelivery policies and different dead letter channels for various exception types. 
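Before the onException() example that follows, here is a sketch of the redelivery hooks described above: an onRedelivery processor, an onExceptionOccurred processor used for custom logging, and the allowRedeliveryWhileStopping option. The processor bodies, endpoint names, and option values are placeholders rather than the original MyRedeliveryProcessor implementation.

import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class RedeliveryHooksRoute extends RouteBuilder {
    private static final Logger LOG = LoggerFactory.getLogger(RedeliveryHooksRoute.class);

    @Override
    public void configure() {
        // Called just before every redelivery attempt, so the message can be altered first
        Processor beforeRedelivery = exchange ->
            exchange.getIn().setHeader("FixedBeforeRedelivery", Boolean.TRUE);

        // Called immediately after an exception occurs (custom logging)
        Processor onError = exchange -> {
            Exception cause = exchange.getProperty(Exchange.EXCEPTION_CAUGHT, Exception.class);
            LOG.warn("Exception occurred while routing: {}", cause.getMessage());
        };

        errorHandler(deadLetterChannel("jms:queue:dead")
            .onRedelivery(beforeRedelivery)
            .onExceptionOccurred(onError)
            .allowRedeliveryWhileStopping(false) // stop retrying during graceful shutdown
            .maximumRedeliveries(3)
            .redeliveryDelay(5000));

        from("jms:queue:order:input").to("bean:handleOrder");
    }
}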
For example, to define distinct behavior for each of the NullPointerException , IOException , and Exception types, you can define the following rules in your route builder using Java DSL: Where the redelivery options are specified by chaining the redelivery policy methods (as listed in Table 6.1, "Redelivery Policy Settings" ), and you specify the dead letter channel's endpoint using the to() DSL command. You can also call other Java DSL commands in the onException() clauses. For example, the preceding example calls setHeader() to record some error details in a message header named messageInfo . In this example, the NullPointerException and the IOException exception types are configured specially. All other exception types are handled by the generic Exception interceptor. By default, Apache Camel applies the exception interceptor that most closely matches the thrown exception. If it fails to find an exact match, it tries to match the closest base type, and so on. Finally, if no other interceptor matches, the interceptor for the Exception type matches all remaining exceptions. OnPrepareFailure Before you pass the exchange to the dead letter queue, you can use the onPrepare option to allow a custom processor to prepare the exchange. It enables you to add information about the exchange, such as the cause of the exchange failure. For example, the following processor adds a header with the exception message. You can configure the error handler to use the processor as follows. However, the onPrepare option is also available using the default error handler. 6.4. Guaranteed Delivery Overview Guaranteed delivery means that once a message is placed into a message channel, the messaging system guarantees that the message will reach its destination, even if parts of the application should fail. In general, messaging systems implement the guaranteed delivery pattern, shown in Figure 6.4, "Guaranteed Delivery Pattern" , by writing messages to persistent storage before attempting to deliver them to their destination. Figure 6.4. Guaranteed Delivery Pattern Components that support guaranteed delivery The following Apache Camel components support the guaranteed delivery pattern: JMS ActiveMQ ActiveMQ Journal File Component in the Apache Camel Component Reference Guide JMS In JMS, the deliveryPersistent query option indicates whether or not persistent storage of messages is enabled. Usually it is unnecessary to set this option, because the default behavior is to enable persistent delivery. To configure all the details of guaranteed delivery, it is necessary to set configuration options on the JMS provider. These details vary, depending on what JMS provider you are using. For example, MQSeries, TibCo, BEA, Sonic, and others, all provide various qualities of service to support guaranteed delivery. See Jms in the Apache Camel Component Reference Guide for more details. ActiveMQ In ActiveMQ, message persistence is enabled by default. From version 5 onwards, ActiveMQ uses the AMQ message store as the default persistence mechanism. There are several different approaches you can use to enable message persistence in ActiveMQ. The simplest option (different from Figure 6.4, "Guaranteed Delivery Pattern" ) is to enable persistence in a central broker and then connect to that broker using a reliable protocol. After a message has been sent to the central broker, delivery to consumers is guaranteed.
For example, in the Apache Camel configuration file, META-INF/spring/camel-context.xml , you can configure the ActiveMQ component to connect to the central broker using the OpenWire/TCP protocol as follows: If you prefer to implement an architecture where messages are stored locally before being sent to a remote endpoint (similar to Figure 6.4, "Guaranteed Delivery Pattern" ), you can do this by instantiating an embedded broker in your Apache Camel application. A simple way to achieve this is to use the ActiveMQ Peer-to-Peer protocol, which implicitly creates an embedded broker to communicate with other peer endpoints. For example, in the camel-context.xml configuration file, you can configure the ActiveMQ component to connect to all of the peers in the group, GroupA , as follows: Where broker1 is the broker name of the embedded broker (other peers in the group should use different broker names). One limiting feature of the Peer-to-Peer protocol is that it relies on IP multicast to locate the other peers in its group. This makes it unsuitable for use in wide area networks (and in some local area networks that do not have IP multicast enabled). A more flexible way to create an embedded broker in the ActiveMQ component is to exploit ActiveMQ's VM protocol, which connects to an embedded broker instance. If a broker of the required name does not already exist, the VM protocol automatically creates one. You can use this mechanism to create an embedded broker with custom configuration. For example: Where activemq.xml is an ActiveMQ configuration file that configures the embedded broker instance. Within the ActiveMQ configuration file, you can choose to enable one of the following persistence mechanisms: AMQ persistence (the default) - A fast and reliable message store that is native to ActiveMQ. For details, see amqPersistenceAdapter and AMQ Message Store . JDBC persistence - Uses JDBC to store messages in any JDBC-compatible database. For details, see jdbcPersistenceAdapter and ActiveMQ Persistence . Journal persistence - A fast persistence mechanism that stores messages in a rolling log file. For details, see journalPersistenceAdapter and ActiveMQ Persistence . Kaha persistence - A persistence mechanism developed specifically for ActiveMQ. For details, see kahaPersistenceAdapter and ActiveMQ Persistence . See ActiveMQ in the Apache Camel Component Reference Guide for more details. ActiveMQ Journal The ActiveMQ Journal component is optimized for a special use case where multiple, concurrent producers write messages to queues, but there is only one active consumer. Messages are stored in rolling log files and concurrent writes are aggregated to boost efficiency. 6.5. Message Bus Overview Message bus refers to a messaging architecture, shown in Figure 6.5, "Message Bus Pattern" , that enables you to connect diverse applications running on diverse computing platforms. In effect, Apache Camel and its components constitute a message bus. Figure 6.5. Message Bus Pattern The following features of the message bus pattern are reflected in Apache Camel: Common communication infrastructure - The router itself provides the core of the common communication infrastructure in Apache Camel. However, in contrast to some message bus architectures, Apache Camel provides a heterogeneous infrastructure: messages can be sent into the bus using a wide variety of different transports and using a wide variety of different message formats.
Adapters - Where necessary, Apache Camel can translate message formats and propagate messages using different transports. In effect, Apache Camel is capable of behaving like an adapter, so that external applications can hook into the message bus without refactoring their messaging protocols. In some cases, it is also possible to integrate an adapter directly into an external application. For example, if you develop an application using Apache CXF, where the service is implemented using JAX-WS and JAXB mappings, it is possible to bind a variety of different transports to the service. These transport bindings function as adapters. | [
"jms:queue:Foo.Bar",
"jms:Foo.Bar",
"activemq:queue:Foo.Bar",
"seda:SedaQueue",
"jms:topic:StockQuotes",
"activemq:topic:StockQuotes",
"from(\"seda:a\").to(\"seda:b\", \"seda:c\", \"seda:d\");",
"<camelContext id=\"buildStaticRecipientList\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"seda:a\"/> <to uri=\"seda:b\"/> <to uri=\"seda:c\"/> <to uri=\"seda:d\"/> </route> </camelContext>",
"errorHandler(deadLetterChannel(\"seda:errors\")); from(\"seda:a\").to(\"seda:b\");",
"<route errorHandlerRef=\"myDeadLetterErrorHandler\"> </route> <bean id=\"myDeadLetterErrorHandler\" class=\"org.apache.camel.builder.DeadLetterChannelBuilder\"> <property name=\"deadLetterUri\" value=\"jms:queue:dead\"/> <property name=\"redeliveryPolicy\" ref=\"myRedeliveryPolicyConfig\"/> </bean> <bean id=\"myRedeliveryPolicyConfig\" class=\"org.apache.camel.processor.RedeliveryPolicy\"> <property name=\"maximumRedeliveries\" value=\"3\"/> <property name=\"redeliveryDelay\" value=\"5000\"/> </bean>",
"errorHandler(deadLetterChannel(\"seda:errors\").maximumRedeliveries(2).useExponentialBackOff()); from(\"seda:a\").to(\"seda:b\");",
"d, m*d, m*m*d, m*m*m*d,",
"from(\"jms:queue:order:input\") .to(\"bean:validateOrder\"); .to(\"bean:transformOrder\") .to(\"bean:handleOrder\");",
"// will use original body errorHandler(deadLetterChannel(\"jms:queue:dead\") .useOriginalMessage().maximumRedeliveries(5).redeliveryDelay(5000);",
"// Java String lastEndpointUri = exchange.getProperty(Exchange.TO_ENDPOINT, String.class);",
"// Java String failedEndpointUri = exchange.getProperty(Exchange.FAILURE_ENDPOINT, String.class);",
"from(\"activemq:queue:foo\") .to(\"http://someserver/somepath\") .beanRef(\"foo\");",
"// we configure our Dead Letter Channel to invoke // MyRedeliveryProcessor before a redelivery is // attempted. This allows us to alter the message before errorHandler(deadLetterChannel(\"mock:error\").maximumRedeliveries(5) .onRedelivery(new MyRedeliverProcessor()) // setting delay to zero is just to make unit teting faster .redeliveryDelay(0L));",
"// This is our processor that is executed before every redelivery attempt // here we can do what we want in the java code, such as altering the message public class MyRedeliverProcessor implements Processor { public void process(Exchange exchange) throws Exception { // the message is being redelivered so we can alter it // we just append the redelivery counter to the body // you can of course do all kind of stuff instead String body = exchange.getIn().getBody(String.class); int count = exchange.getIn().getHeader(Exchange.REDELIVERY_COUNTER, Integer.class); exchange.getIn().setBody(body + count); // the maximum redelivery was set to 5 int max = exchange.getIn().getHeader(Exchange.REDELIVERY_MAX_COUNTER, Integer.class); assertEquals(5, max); } }",
"errorHandler(deadLetterChannel(\"jms:queue:dead\") .allowRedeliveryWhileStopping(false) .maximumRedeliveries(20) .redeliveryDelay(1000) .retryAttemptedLogLevel(LoggingLevel.INFO));",
"errorHandler(defaultErrorHandler().maximumRedeliveries(3).redeliveryDelay(5000).onExceptionOccurred(myProcessor));",
"onException(NullPointerException.class) .maximumRedeliveries(1) .setHeader(\"messageInfo\", \"Oh dear! An NPE.\") .to(\"mock:npe_error\"); onException(IOException.class) .initialRedeliveryDelay(5000L) .maximumRedeliveries(3) .backOffMultiplier(1.0) .useExponentialBackOff() .setHeader(\"messageInfo\", \"Oh dear! Some kind of I/O exception.\") .to(\"mock:io_error\"); onException(Exception.class) .initialRedeliveryDelay(1000L) .maximumRedeliveries(2) .setHeader(\"messageInfo\", \"Oh dear! An exception.\") .to(\"mock:error\"); from(\"seda:a\").to(\"seda:b\");",
"public class MyPrepareProcessor implements Processor { @Override public void process(Exchange exchange) throws Exception { Exception cause = exchange.getProperty(Exchange.EXCEPTION_CAUGHT, Exception.class); exchange.getIn().setHeader(\"FailedBecause\", cause.getMessage()); } }",
"errorHandler(deadLetterChannel(\"jms:dead\").onPrepareFailure(new MyPrepareProcessor()));",
"<bean id=\"myPrepare\" class=\"org.apache.camel.processor.DeadLetterChannelOnPrepareTest.MyPrepareProcessor\"/> <errorHandler id=\"dlc\" type=\"DeadLetterChannel\" deadLetterUri=\"jms:dead\" onPrepareFailureRef=\"myPrepare\"/>",
"<beans ... > <bean id=\"activemq\" class=\"org.apache.activemq.camel.component.ActiveMQComponent\"> <property name=\"brokerURL\" value=\"tcp://somehost:61616\"/> </bean> </beans>",
"<beans ... > <bean id=\"activemq\" class=\"org.apache.activemq.camel.component.ActiveMQComponent\"> <property name=\"brokerURL\" value=\"peer://GroupA/broker1\"/> </bean> </beans>",
"<beans ... > <bean id=\"activemq\" class=\"org.apache.activemq.camel.component.ActiveMQComponent\"> <property name=\"brokerURL\" value=\"vm://broker1?brokerConfig=xbean:activemq.xml\"/> </bean> </beans>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/msgch |
CLI reference | CLI reference Red Hat Enterprise Linux AI 1.3 RHEL AI command line interface (CLI) reference Red Hat RHEL AI Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.3/html/cli_reference/index |
Chapter 1. Overview | Chapter 1. Overview AMQ Broker configuration files define important settings for a broker instance. By editing a broker's configuration files, you can control how the broker operates in your environment. 1.1. AMQ Broker configuration files and locations All of a broker's configuration files are stored in <broker_instance_dir> /etc . You can configure a broker by editing the settings in these configuration files. Each broker instance uses the following configuration files: broker.xml The main configuration file. You use this file to configure most aspects of the broker, such as network connections, security settings, message addresses, and so on. bootstrap.xml The file that AMQ Broker uses to start a broker instance. You use it to change the location of broker.xml , configure the web server, and set some security settings. logging.properties You use this file to set logging properties for the broker instance. artemis.profile You use this file to set environment variables used while the broker instance is running. login.config , artemis-users.properties , artemis-roles.properties Security-related files. You use these files to set up authentication for user access to the broker instance. 1.2. Understanding the default broker configuration You configure most of a broker's functionality by editing the broker.xml configuration file. This file contains default settings, which are sufficient to start and operate a broker. However, you will likely need to change some of the default settings and add new settings to configure the broker for your environment. By default, broker.xml contains default settings for the following functionality: Message persistence Acceptors Security Message addresses Default message persistence settings By default, AMQ Broker persistence uses an append-only file journal that consists of a set of files on disk. The journal saves messages, transactions, and other information. <configuration ...> <core ...> ... <persistence-enabled>true</persistence-enabled> <!-- this could be ASYNCIO, MAPPED, NIO ASYNCIO: Linux Libaio MAPPED: mmap files NIO: Plain Java Files --> <journal-type>ASYNCIO</journal-type> <paging-directory>data/paging</paging-directory> <bindings-directory>data/bindings</bindings-directory> <journal-directory>data/journal</journal-directory> <large-messages-directory>data/large-messages</large-messages-directory> <journal-datasync>true</journal-datasync> <journal-min-files>2</journal-min-files> <journal-pool-files>10</journal-pool-files> <journal-file-size>10M</journal-file-size> <!-- This value was determined through a calculation. Your system could perform 8.62 writes per millisecond on the current journal configuration. That translates as a sync write every 115999 nanoseconds. Note: If you specify 0 the system will perform writes directly to the disk. We recommend this to be 0 if you are using journalType=MAPPED and journal-datasync=false. --> <journal-buffer-timeout>115999</journal-buffer-timeout> <!-- When using ASYNCIO, this will determine the writing queue depth for libaio. --> <journal-max-io>4096</journal-max-io> <!-- how often we are looking for how many bytes are being used on the disk in ms --> <disk-scan-period>5000</disk-scan-period> <!-- once the disk hits this limit the system will block, or close the connection in certain protocols that won't support flow control. 
--> <max-disk-usage>90</max-disk-usage> <!-- should the broker detect dead locks and other issues --> <critical-analyzer>true</critical-analyzer> <critical-analyzer-timeout>120000</critical-analyzer-timeout> <critical-analyzer-check-period>60000</critical-analyzer-check-period> <critical-analyzer-policy>HALT</critical-analyzer-policy> ... </core> </configuration> Default acceptor settings Brokers listen for incoming client connections by using an acceptor configuration element to define the port and protocols a client can use to make connections. By default, AMQ Broker includes an acceptor for each supported messaging protocol, as shown below. <configuration ...> <core ...> ... <acceptors> <!-- Acceptor for every supported protocol --> <acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor> <!-- AMQP Acceptor. Listens on default AMQP port for AMQP traffic --> <acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor> <!-- STOMP Acceptor --> <acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor> <!-- HornetQ Compatibility Acceptor. Enables HornetQ Core and STOMP for legacy HornetQ clients. --> <acceptor name="hornetq">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor> <!-- MQTT Acceptor --> <acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor> </acceptors> ... </core> </configuration> Default security settings AMQ Broker contains a flexible role-based security model for applying security to queues, based on their addresses. The default configuration uses wildcards to apply the amq role to all addresses (represented by the number sign, # ). <configuration ...> <core ...> ... <security-settings> <security-setting match="#"> <permission type="createNonDurableQueue" roles="amq"/> <permission type="deleteNonDurableQueue" roles="amq"/> <permission type="createDurableQueue" roles="amq"/> <permission type="deleteDurableQueue" roles="amq"/> <permission type="createAddress" roles="amq"/> <permission type="deleteAddress" roles="amq"/> <permission type="consume" roles="amq"/> <permission type="browse" roles="amq"/> <permission type="send" roles="amq"/> <!-- we need this otherwise ./artemis data imp wouldn't work --> <permission type="manage" roles="amq"/> </security-setting> </security-settings> ... </core> </configuration> Default message address settings AMQ Broker includes a default address that establishes a default set of configuration settings to be applied to any created queue or topic. Additionally, the default configuration defines two queues: DLQ (Dead Letter Queue) handles messages that arrive with no known destination, and Expiry Queue holds messages that have lived past their expiration and therefore should not be routed to their original destination. <configuration ...> <core ...> ... <address-settings> ... 
<!--default for catch all--> <address-setting match="#"> <dead-letter-address>DLQ</dead-letter-address> <expiry-address>ExpiryQueue</expiry-address> <redelivery-delay>0</redelivery-delay> <!-- with -1 only the global-max-size is in use for limiting --> <max-size-bytes>-1</max-size-bytes> <message-counter-history-day-limit>10</message-counter-history-day-limit> <address-full-policy>PAGE</address-full-policy> <auto-create-queues>true</auto-create-queues> <auto-create-addresses>true</auto-create-addresses> <auto-create-jms-queues>true</auto-create-jms-queues> <auto-create-jms-topics>true</auto-create-jms-topics> </address-setting> </address-settings> <addresses> <address name="DLQ"> <anycast> <queue name="DLQ" /> </anycast> </address> <address name="ExpiryQueue"> <anycast> <queue name="ExpiryQueue" /> </anycast> </address> </addresses> </core> </configuration> 1.3. Reloading configuration updates By default, a broker checks for changes in the configuration files every 5000 milliseconds. If the broker detects a change in the "last modified" time stamp of the configuration file, the broker determines that a configuration change took place. In this case, the broker reloads the configuration file to activate the changes. When the broker reloads the broker.xml configuration file, it reloads the following modules: Address settings and queues When the configuration file is reloaded, the address settings determine how to handle addresses and queues that have been deleted from the configuration file. You can set this with the config-delete-addresses and config-delete-queues properties. For more information, see Appendix B, Address Setting Configuration Elements . Security settings SSL/TLS keystores and truststores on an existing acceptor can be reloaded to establish new certificates without any impact to existing clients. Connected clients, even those with older or differing certificates, can continue to send and receive messages. The certificate revocation list file, which is configured by using the crlPath parameter, can also be reloaded. Diverts A configuration reload deploys any new divert that you have added. However, removal of a divert from the configuration or a change to a sub-element within a <divert> element does not take effect until you restart the broker. The following procedure shows how to change the interval at which the broker checks for changes to the broker.xml configuration file. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Within the <core> element, add the <configuration-file-refresh-period> element and set the refresh period (in milliseconds). This example sets the configuration refresh period to be 60000 milliseconds: <configuration> <core> ... <configuration-file-refresh-period>60000</configuration-file-refresh-period> ... </core> </configuration> It is also possible to force the reloading of the configuration file using the Management API or the console if for some reason access to the configuration file is not possible. Configuration files can be reloaded using the management operation reloadConfigurationFile() on the ActiveMQServerControl (with the ObjectName org.apache.activemq.artemis:broker=" BROKER_NAME " or the resource name server ) Additional resources To learn how to use the management API, see Using the Management API in Managing AMQ Broker
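For example, the following sketch shows one way to invoke that operation over JMX from Java. It is illustrative only: the class name, JMX service URL, and port are placeholders, JMX access must be enabled for the broker, and BROKER_NAME must be replaced with your actual broker name.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ReloadBrokerConfiguration {
    public static void main(String[] args) throws Exception {
        // Placeholder service URL; adjust the host, port, and path for your JMX setup
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            // ObjectName of the ActiveMQServerControl, as described above; replace BROKER_NAME
            ObjectName broker = new ObjectName("org.apache.activemq.artemis:broker=\"BROKER_NAME\"");
            // Invoke the reloadConfigurationFile management operation (no arguments)
            connection.invoke(broker, "reloadConfigurationFile", new Object[0], new String[0]);
        } finally {
            connector.close();
        }
    }
}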
1.4. Modularizing the broker configuration file If you have multiple brokers that share common configuration settings, you can define the common configuration in separate files, and then include these files in each broker's broker.xml configuration file. The most common configuration settings that you might share between brokers include: Addresses Address settings Security settings Procedure Create a separate XML file for each broker.xml section that you want to share. Each XML file can only include a single section from broker.xml (for example, either addresses or address settings, but not both). The top-level element must also define the element namespace ( xmlns="urn:activemq:core" ). This example shows a security settings configuration defined in my-security-settings.xml : my-security-settings.xml <security-settings xmlns="urn:activemq:core"> <security-setting match="a1"> <permission type="createNonDurableQueue" roles="a1.1"/> </security-setting> <security-setting match="a2"> <permission type="deleteNonDurableQueue" roles="a2.1"/> </security-setting> </security-settings> Open the <broker_instance_dir> /etc/broker.xml configuration file for each broker that should use the common configuration settings. For each broker.xml file that you opened, do the following: In the <configuration> element at the beginning of broker.xml , verify that the following line appears: xmlns:xi="http://www.w3.org/2001/XInclude" Add an XML inclusion for each XML file that contains shared configuration settings. This example includes the my-security-settings.xml file. broker.xml <configuration ...> <core ...> ... <xi:include href="/opt/my-broker-config/my-security-settings.xml"/> ... </core> </configuration> If desired, validate broker.xml to verify that the XML is valid against the schema. You can use any XML validator program. This example uses xmllint to validate broker.xml against the artemis-server.xsd schema. Additional resources For more information about XML Inclusions (XIncludes), see https://www.w3.org/TR/xinclude/ . 1.4.1. Reloading modular configuration files When the broker periodically checks for configuration changes (according to the frequency specified by configuration-file-refresh-period ), it does not automatically detect changes made to configuration files that are included in the broker.xml configuration file via xi:include . For example, if broker.xml includes my-address-settings.xml and you make configuration changes to my-address-settings.xml , the broker does not automatically detect the changes in my-address-settings.xml and reload the configuration. To force a reload of the broker.xml configuration file and any modified configuration files included within it, you must ensure that the "last modified" time stamp of the broker.xml configuration file has changed. You can use a standard Linux touch command to update the last-modified time stamp of broker.xml without making any other changes. For example: Alternatively, you can use the management API to force a reload of the broker. Configuration files can be reloaded using the management operation reloadConfigurationFile() on the ActiveMQServerControl (with the ObjectName org.apache.activemq.artemis:broker=" BROKER_NAME " or the resource name server ) Additional resources To learn how to use the management API, see Using the Management API in Managing AMQ Broker 1.4.2.
Disabling External XML Entity (XXE) processing If you don't want to modularize your broker configuration in separate files that are included in the broker.xml file, you can disable XXE processing to protect AMQ Broker against XXE security vulnerabilities. If you don't have a modular broker configuration, Red Hat recommends that you disable XXE processing. Procedure Open the <broker_instance_dir>/etc/artemis.profile file. Add a new argument, -Dartemis.disableXxe , to the JAVA_ARGS list of Java system arguments. -Dartemis.disableXxe=true Save the artemis.profile file. 1.5. Extending the JAVA Classpath By default, JAR files in the <broker_instance_dir> /lib directory are loaded at runtime because the directory is part of the Java classpath. If you want AMQ Broker to load JAR files from a directory other than <broker_instance_dir> /lib , you must add that directory to the Java classpath. To add a directory to the Java class path, you can use either of the following methods: In the <broker_instance_dir>/etc/artemis.profile file, add a new property, artemis.extra.libs to the JAVA_ARGS list of system properties. Set the ARTEMIS_EXTRA_LIBS environment variable. The following are examples of comma-separated lists of directories that are added to the Java Classpath by using both methods: -Dartemis.extra.libs=/usr/local/share/java/lib1,/usr/local/share/java/lib2 export ARTEMIS_EXTRA_LIBS=/usr/local/share/java/lib1,/usr/local/share/java/lib2 Note The ARTEMIS_EXTRA_LIBS environment variable is ignored if the artemis.extra.libs Java system property is configured in the <broker_instance_dir>/etc/artemis.profile file. 1.6. Document conventions This document uses the following conventions for the sudo command, file paths, and replaceable values. The sudo command In this document, sudo is used for any command that requires root privileges. You should always exercise caution when using sudo , as any changes can affect the entire system. For more information about using sudo , see Managing sudo access . About the use of file paths in this document In this document, all file paths are valid for Linux, UNIX, and similar operating systems (for example, /home/... ). If you are using Microsoft Windows, you should use the equivalent Microsoft Windows paths (for example, C:\Users\... ). Replaceable values This document sometimes uses replaceable values that you must replace with values specific to your environment. Replaceable values are lowercase, enclosed by angle brackets ( < > ), and are styled using italics and monospace font. Multiple words are separated by underscores ( _ ) . For example, in the following command, replace <install_dir> with your own directory name. USD <install_dir> /bin/artemis create mybroker | [
"<configuration ...> <core ...> <persistence-enabled>true</persistence-enabled> <!-- this could be ASYNCIO, MAPPED, NIO ASYNCIO: Linux Libaio MAPPED: mmap files NIO: Plain Java Files --> <journal-type>ASYNCIO</journal-type> <paging-directory>data/paging</paging-directory> <bindings-directory>data/bindings</bindings-directory> <journal-directory>data/journal</journal-directory> <large-messages-directory>data/large-messages</large-messages-directory> <journal-datasync>true</journal-datasync> <journal-min-files>2</journal-min-files> <journal-pool-files>10</journal-pool-files> <journal-file-size>10M</journal-file-size> <!-- This value was determined through a calculation. Your system could perform 8.62 writes per millisecond on the current journal configuration. That translates as a sync write every 115999 nanoseconds. Note: If you specify 0 the system will perform writes directly to the disk. We recommend this to be 0 if you are using journalType=MAPPED and journal-datasync=false. --> <journal-buffer-timeout>115999</journal-buffer-timeout> <!-- When using ASYNCIO, this will determine the writing queue depth for libaio. --> <journal-max-io>4096</journal-max-io> <!-- how often we are looking for how many bytes are being used on the disk in ms --> <disk-scan-period>5000</disk-scan-period> <!-- once the disk hits this limit the system will block, or close the connection in certain protocols that won't support flow control. --> <max-disk-usage>90</max-disk-usage> <!-- should the broker detect dead locks and other issues --> <critical-analyzer>true</critical-analyzer> <critical-analyzer-timeout>120000</critical-analyzer-timeout> <critical-analyzer-check-period>60000</critical-analyzer-check-period> <critical-analyzer-policy>HALT</critical-analyzer-policy> </core> </configuration>",
"<configuration ...> <core ...> <acceptors> <!-- Acceptor for every supported protocol --> <acceptor name=\"artemis\">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor> <!-- AMQP Acceptor. Listens on default AMQP port for AMQP traffic --> <acceptor name=\"amqp\">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor> <!-- STOMP Acceptor --> <acceptor name=\"stomp\">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor> <!-- HornetQ Compatibility Acceptor. Enables HornetQ Core and STOMP for legacy HornetQ clients. --> <acceptor name=\"hornetq\">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor> <!-- MQTT Acceptor --> <acceptor name=\"mqtt\">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor> </acceptors> </core> </configuration>",
"<configuration ...> <core ...> <security-settings> <security-setting match=\"#\"> <permission type=\"createNonDurableQueue\" roles=\"amq\"/> <permission type=\"deleteNonDurableQueue\" roles=\"amq\"/> <permission type=\"createDurableQueue\" roles=\"amq\"/> <permission type=\"deleteDurableQueue\" roles=\"amq\"/> <permission type=\"createAddress\" roles=\"amq\"/> <permission type=\"deleteAddress\" roles=\"amq\"/> <permission type=\"consume\" roles=\"amq\"/> <permission type=\"browse\" roles=\"amq\"/> <permission type=\"send\" roles=\"amq\"/> <!-- we need this otherwise ./artemis data imp wouldn't work --> <permission type=\"manage\" roles=\"amq\"/> </security-setting> </security-settings> </core> </configuration>",
"<configuration ...> <core ...> <address-settings> <!--default for catch all--> <address-setting match=\"#\"> <dead-letter-address>DLQ</dead-letter-address> <expiry-address>ExpiryQueue</expiry-address> <redelivery-delay>0</redelivery-delay> <!-- with -1 only the global-max-size is in use for limiting --> <max-size-bytes>-1</max-size-bytes> <message-counter-history-day-limit>10</message-counter-history-day-limit> <address-full-policy>PAGE</address-full-policy> <auto-create-queues>true</auto-create-queues> <auto-create-addresses>true</auto-create-addresses> <auto-create-jms-queues>true</auto-create-jms-queues> <auto-create-jms-topics>true</auto-create-jms-topics> </address-setting> </address-settings> <addresses> <address name=\"DLQ\"> <anycast> <queue name=\"DLQ\" /> </anycast> </address> <address name=\"ExpiryQueue\"> <anycast> <queue name=\"ExpiryQueue\" /> </anycast> </address> </addresses> </core> </configuration>",
"<configuration> <core> <configuration-file-refresh-period>60000</configuration-file-refresh-period> </core> </configuration>",
"<security-settings xmlns=\"urn:activemq:core\"> <security-setting match=\"a1\"> <permission type=\"createNonDurableQueue\" roles=\"a1.1\"/> </security-setting> <security-setting match=\"a2\"> <permission type=\"deleteNonDurableQueue\" roles=\"a2.1\"/> </security-setting> </security-settings>",
"xmlns:xi=\"http://www.w3.org/2001/XInclude\"",
"<configuration ...> <core ...> <xi:include href=\"/opt/my-broker-config/my-security-settings.xml\"/> </core> </configuration>",
"xmllint --noout --xinclude --schema /opt/redhat/amq-broker/amq-broker-7.2.0/schema/artemis-server.xsd /var/opt/amq-broker/mybroker/etc/broker.xml /var/opt/amq-broker/mybroker/etc/broker.xml validates",
"touch -m <broker_instance_dir> /etc/broker.xml",
"-Dartemis.disableXxe=true",
"-Dartemis.extra.libs=/usr/local/share/java/lib1,/usr/local/share/java/lib2",
"export ARTEMIS_EXTRA_LIBS=/usr/local/share/java/lib1,/usr/local/share/java/lib2",
"<install_dir> /bin/artemis create mybroker"
] | https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.12/html/configuring_amq_broker/overview-configuring |
Chapter 2. Using Dev Spaces in team workflow | Chapter 2. Using Dev Spaces in team workflow Learn about the benefits of using OpenShift Dev Spaces in your organization in the following articles: Section 2.1, "Badge for first-time contributors" Section 2.2, "Reviewing pull and merge requests" 2.1. Badge for first-time contributors To enable a first-time contributor to start a workspace with a project, add a badge with a link to your OpenShift Dev Spaces instance. Figure 2.1. Factory badge Procedure Substitute your OpenShift Dev Spaces URL ( https:// <openshift_dev_spaces_fqdn> ) and repository URL ( <your_repository_url> ), and add the link to your repository in the project README.md file. The README.md file in your Git provider web interface displays the factory badge. Click the badge to open a workspace with your project in your OpenShift Dev Spaces instance. 2.2. Reviewing pull and merge requests A Red Hat OpenShift Dev Spaces workspace contains all the tools you need to review pull and merge requests from start to finish. By clicking an OpenShift Dev Spaces link, you get access to the Red Hat OpenShift Dev Spaces-supported web IDE with a ready-to-use workspace where you can run a linter, unit tests, the build, and more. Prerequisites You have access to the repository hosted by your Git provider. You have access to an OpenShift Dev Spaces instance. Procedure Open the feature branch to review in OpenShift Dev Spaces. A clone of the branch opens in a workspace with tools for debugging and testing. Check the pull or merge request changes. Run your desired debugging and testing tools: Run a linter. Run unit tests. Run the build. Run the application to check for problems. Navigate to the UI of your Git provider to leave a comment and pull or merge your assigned request. Verification (optional) Open a second workspace using the main branch of the repository to reproduce a problem. 2.3. Try in Web IDE GitHub action The Try in Web IDE GitHub action can be added to a GitHub repository workflow to help reviewers quickly test pull requests on Eclipse Che hosted by Red Hat. The action achieves this by listening to pull request events and providing a factory URL in a comment, a status check, or both. This factory URL creates a new workspace from the pull request branch on Eclipse Che hosted by Red Hat. Note The Che documentation repository ( https://github.com/eclipse/che-docs ) is a real-life example where the Try in Web IDE GitHub action helps reviewers quickly test pull requests. Experience the workflow by navigating to a recent pull request and opening a factory URL. Figure 2.2. Pull request comment created by the Try in Web IDE GitHub action. Clicking the badge opens a new workspace for reviewers to test the pull request. Figure 2.3. Pull request status check created by the Try in Web IDE GitHub action. Clicking the "Details" link opens a new workspace for reviewers to test the pull request. 2.3.1. Adding the action to a GitHub repository workflow This section describes how to integrate the Try in Web IDE GitHub action into a GitHub repository workflow. Prerequisites A GitHub repository A devfile in the root of the GitHub repository. Procedure In the GitHub repository, create a .github/workflows directory if it does not exist already. Create an example.yml file in the .github/workflows directory with the following content: Example 2.1.
example.yml name: Try in Web IDE example on: pull_request_target: types: [opened] jobs: add-link: runs-on: ubuntu-20.04 steps: - name: Web IDE Pull Request Check id: try-in-web-ide uses: redhat-actions/try-in-web-ide@v1 with: # GitHub action inputs # required github_token: USD{{ secrets.GITHUB_TOKEN }} # optional - defaults to true add_comment: true # optional - defaults to true add_status: true This code snippet creates a workflow named Try in Web IDE example , with a job that runs the v1 version of the redhat-actions/try-in-web-ide community action. The workflow is triggered on the pull_request_target event , on the opened activity type. Optionally configure the activity types from the on.pull_request_target.types field to customize when the workflow triggers. Activity types such as reopened and synchronize can be useful. Example 2.2. Triggering the workflow on both opened and synchronize activity types on: pull_request_target: types: [opened, synchronize] Optionally configure the add_comment and add_status GitHub action inputs within example.yml . These inputs are sent to the Try in Web IDE GitHub action to customize whether comments and status checks are created. 2.3.2. Providing a devfile Providing a devfile in the root directory of the repository is recommended to define the development environment of the workspace created by the factory URL. In this way, the workspace contains everything users need to review pull requests, such as plugins, development commands, and other environment setup. The Che documentation repository devfile is an example of a well-defined and effective devfile. | [
"[](https:// <openshift_dev_spaces_fqdn> /#https:// <your_repository_url> )",
"name: Try in Web IDE example on: pull_request_target: types: [opened] jobs: add-link: runs-on: ubuntu-20.04 steps: - name: Web IDE Pull Request Check id: try-in-web-ide uses: redhat-actions/try-in-web-ide@v1 with: # GitHub action inputs # required github_token: USD{{ secrets.GITHUB_TOKEN }} # optional - defaults to true add_comment: true # optional - defaults to true add_status: true",
"on: pull_request_target: types: [opened, synchronize]"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.15/html/user_guide/using-devspaces-in-team-workflow |
Release notes for Red Hat build of OpenJDK 17.0.9 | Release notes for Red Hat build of OpenJDK 17.0.9 Red Hat build of OpenJDK 17 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.9/index |
2.3. Checking the Status of NetworkManager | 2.3. Checking the Status of NetworkManager To check whether NetworkManager is running: Note that the systemctl status command displays Active: inactive (dead) when NetworkManager is not running. | [
"~]USD systemctl status NetworkManager NetworkManager.service - Network Manager Loaded: loaded (/lib/systemd/system/NetworkManager.service; enabled) Active: active (running) since Fri, 08 Mar 2013 12:50:04 +0100; 3 days ago"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-Checking_the_Status_of_NetworkManager |
Chapter 10. ServiceAccount [v1] | Chapter 10. ServiceAccount [v1] Description ServiceAccount binds together: * a name, understood by users, and perhaps by peripheral systems, for an identity * a principal that can be authenticated and authorized * a set of secrets Type object 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources automountServiceAccountToken boolean AutomountServiceAccountToken indicates whether pods running as this service account should have an API token automatically mounted. Can be overridden at the pod level. imagePullSecrets array ImagePullSecrets is a list of references to secrets in the same namespace to use for pulling any images in pods that reference this ServiceAccount. ImagePullSecrets are distinct from Secrets because Secrets can be mounted in the pod, but ImagePullSecrets are only accessed by the kubelet. More info: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod imagePullSecrets[] object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata secrets array Secrets is a list of the secrets in the same namespace that pods running using this ServiceAccount are allowed to use. Pods are only limited to this list if this service account has a "kubernetes.io/enforce-mountable-secrets" annotation set to "true". This field should not be used to find auto-generated service account token secrets for use outside of pods. Instead, tokens can be requested directly using the TokenRequest API, or service account token secrets can be manually created. More info: https://kubernetes.io/docs/concepts/configuration/secret secrets[] object ObjectReference contains enough information to let you inspect or modify the referred object. 10.1.1. .imagePullSecrets Description ImagePullSecrets is a list of references to secrets in the same namespace to use for pulling any images in pods that reference this ServiceAccount. ImagePullSecrets are distinct from Secrets because Secrets can be mounted in the pod, but ImagePullSecrets are only accessed by the kubelet. More info: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod Type array 10.1.2. .imagePullSecrets[] Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 10.1.3. .secrets Description Secrets is a list of the secrets in the same namespace that pods running using this ServiceAccount are allowed to use. 
Pods are only limited to this list if this service account has a "kubernetes.io/enforce-mountable-secrets" annotation set to "true". This field should not be used to find auto-generated service account token secrets for use outside of pods. Instead, tokens can be requested directly using the TokenRequest API, or service account token secrets can be manually created. More info: https://kubernetes.io/docs/concepts/configuration/secret Type array 10.1.4. .secrets[] Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 10.2. API endpoints The following API endpoints are available: /api/v1/serviceaccounts GET : list or watch objects of kind ServiceAccount /api/v1/watch/serviceaccounts GET : watch individual changes to a list of ServiceAccount. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/serviceaccounts DELETE : delete collection of ServiceAccount GET : list or watch objects of kind ServiceAccount POST : create a ServiceAccount /api/v1/watch/namespaces/{namespace}/serviceaccounts GET : watch individual changes to a list of ServiceAccount. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/serviceaccounts/{name} DELETE : delete a ServiceAccount GET : read the specified ServiceAccount PATCH : partially update the specified ServiceAccount PUT : replace the specified ServiceAccount /api/v1/watch/namespaces/{namespace}/serviceaccounts/{name} GET : watch changes to an object of kind ServiceAccount. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 10.2.1. /api/v1/serviceaccounts Table 10.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind ServiceAccount Table 10.2. HTTP responses HTTP code Reponse body 200 - OK ServiceAccountList schema 401 - Unauthorized Empty 10.2.2. /api/v1/watch/serviceaccounts Table 10.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ServiceAccount. deprecated: use the 'watch' parameter with a list operation instead. Table 10.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 10.2.3. /api/v1/namespaces/{namespace}/serviceaccounts Table 10.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 10.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ServiceAccount Table 10.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. 
This limits the duration of the call, regardless of any activity or inactivity. Table 10.8. Body parameters Parameter Type Description body DeleteOptions schema Table 10.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ServiceAccount Table 10.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. 
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 10.11. HTTP responses HTTP code Reponse body 200 - OK ServiceAccountList schema 401 - Unauthorized Empty HTTP method POST Description create a ServiceAccount Table 10.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.13. Body parameters Parameter Type Description body ServiceAccount schema Table 10.14. HTTP responses HTTP code Reponse body 200 - OK ServiceAccount schema 201 - Created ServiceAccount schema 202 - Accepted ServiceAccount schema 401 - Unauthorized Empty 10.2.4. /api/v1/watch/namespaces/{namespace}/serviceaccounts Table 10.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 10.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". 
Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ServiceAccount. deprecated: use the 'watch' parameter with a list operation instead. Table 10.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 10.2.5. /api/v1/namespaces/{namespace}/serviceaccounts/{name} Table 10.18. Global path parameters Parameter Type Description name string name of the ServiceAccount namespace string object name and auth scope, such as for teams and projects Table 10.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ServiceAccount Table 10.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 10.21. Body parameters Parameter Type Description body DeleteOptions schema Table 10.22. HTTP responses HTTP code Reponse body 200 - OK ServiceAccount schema 202 - Accepted ServiceAccount schema 401 - Unauthorized Empty HTTP method GET Description read the specified ServiceAccount Table 10.23. HTTP responses HTTP code Reponse body 200 - OK ServiceAccount schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ServiceAccount Table 10.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 10.25. Body parameters Parameter Type Description body Patch schema Table 10.26. HTTP responses HTTP code Reponse body 200 - OK ServiceAccount schema 201 - Created ServiceAccount schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ServiceAccount Table 10.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.28. Body parameters Parameter Type Description body ServiceAccount schema Table 10.29. HTTP responses HTTP code Reponse body 200 - OK ServiceAccount schema 201 - Created ServiceAccount schema 401 - Unauthorized Empty 10.2.6. /api/v1/watch/namespaces/{namespace}/serviceaccounts/{name} Table 10.30. Global path parameters Parameter Type Description name string name of the ServiceAccount namespace string object name and auth scope, such as for teams and projects Table 10.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list, the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind ServiceAccount. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 10.32. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/security_apis/serviceaccount-v1 |
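The endpoints documented above are standard Kubernetes REST calls, so they can be exercised directly with curl once you have a bearer token that is allowed to read and create ServiceAccounts. The sketch below is illustrative only: the default namespace, the example-sa name, and the <api_server> and <resource_version> placeholders are assumptions, not values taken from this reference.

# Obtain a token for the current user (any valid bearer token works)
TOKEN=$(oc whoami -t)

# List ServiceAccounts in the default namespace, using the limit query parameter
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  "https://<api_server>:6443/api/v1/namespaces/default/serviceaccounts?limit=50"

# Watch for changes, resuming from a resourceVersion returned by the list call
curl -skN -H "Authorization: Bearer ${TOKEN}" \
  "https://<api_server>:6443/api/v1/namespaces/default/serviceaccounts?watch=true&resourceVersion=<resource_version>"

# Create a ServiceAccount, validating it first with a server-side dry run (dryRun=All)
curl -sk -X POST -H "Authorization: Bearer ${TOKEN}" -H "Content-Type: application/json" \
  -d '{"apiVersion":"v1","kind":"ServiceAccount","metadata":{"name":"example-sa"}}' \
  "https://<api_server>:6443/api/v1/namespaces/default/serviceaccounts?dryRun=All"

Dropping the dryRun=All parameter from the last call performs the actual create, and the response codes match Table 10.14 (200, 201, or 202 on success).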
Chapter 2. Architectures | Chapter 2. Architectures Red Hat Enterprise Linux 7.6 is distributed with the kernel version 3.10.0-957, which provides support for the following architectures: [1] 64-bit AMD 64-bit Intel IBM POWER7+ (big endian) IBM POWER8 (big endian) [2] IBM POWER8 (little endian) [3] IBM POWER9 (little endian) [4] [5] IBM Z [4] [6] 64-bit ARM [4] [1] Note that the Red Hat Enterprise Linux 7.6 installation is supported only on 64-bit hardware. Red Hat Enterprise Linux 7.6 is able to run 32-bit operating systems, including versions of Red Hat Enterprise Linux, as virtual machines. [2] Red Hat Enterprise Linux 7.6 POWER8 (big endian) is currently supported as a KVM guest on Red Hat Enterprise Linux 7.6 POWER8 systems that run the KVM hypervisor, and on PowerVM. [3] Red Hat Enterprise Linux 7.6 POWER8 (little endian) is currently supported as a KVM guest on Red Hat Enterprise Linux 7.6 POWER8 systems that run the KVM hypervisor, and on PowerVM. In addition, Red Hat Enterprise Linux 7.6 POWER8 (little endian) guests are supported on Red Hat Enterprise Linux 7.6 POWER9 systems that run the KVM hypervisor in POWER8-compatibility mode on version 4.14 kernel using the kernel-alt package. [4] This architecture is supported with the kernel version 4.14, provided by the kernel-alt packages. For details, see the Red Hat Enterprise Linux 7.5 Release Notes. [5] Red Hat Enterprise Linux 7.6 POWER9 (little endian) is currently supported as a KVM guest on Red Hat Enterprise Linux 7.6 POWER9 systems that run the KVM hypervisor on version 4.14 kernel using the kernel-alt package, and on PowerVM. [6] Red Hat Enterprise Linux 7.6 for IBM Z (both the 3.10 kernel version and the 4.14 kernel version) is currently supported as a KVM guest on Red Hat Enterprise Linux 7.6 for IBM Z hosts that run the KVM hypervisor on version 4.14 kernel using the kernel-alt package. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.6_release_notes/chap-Red_Hat_Enterprise_Linux-7.6_Release_Notes-Architectures |
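To confirm which of these architectures and which kernel stream a particular system is actually running, you can query the kernel directly. This is a minimal sketch using standard commands; the architecture strings in the comment are the usual uname values for these platforms and are listed here for convenience rather than taken from the release notes.

# Print the machine architecture (for example x86_64, ppc64, ppc64le, s390x, or aarch64)
uname -m

# Print the running kernel (3.10.0-957.* for the default kernel, 4.14.* when kernel-alt is used)
uname -r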
Chapter 2. Commonly occurring error conditions | Chapter 2. Commonly occurring error conditions Most errors occur during Collector startup when Collector configures itself and finds or downloads a kernel driver for the system. The following diagram describes the main parts of Collector startup process: Figure 2.1. Collector pod startup process If any part of the startup procedure fails, the logs display a diagnostic summary detailing which steps succeeded or failed . The following log file example shows a successful startup: [INFO 2022/11/28 13:21:55] == Collector Startup Diagnostics: == [INFO 2022/11/28 13:21:55] Connected to Sensor? true [INFO 2022/11/28 13:21:55] Kernel driver available? true [INFO 2022/11/28 13:21:55] Driver loaded into kernel? true [INFO 2022/11/28 13:21:55] ==================================== The log output confirms that Collector connected to Sensor and located and loaded the kernel driver. You can use this log to check for the successful startup of Collector. 2.1. Unable to connect to the Sensor When starting, first check if you can connect to Sensor. Sensor is responsible for downloading kernel drivers and CIDR blocks for processing network events, making it an essential part of the startup process. The following logs indicate you are unable to connect to the Sensor: Collector Version: 3.15.0 OS: Ubuntu 20.04.4 LTS Kernel Version: 5.4.0-126-generic Starting StackRox Collector... [INFO 2023/05/13 12:20:43] Hostname: 'hostname' [...] [INFO 2023/05/13 12:20:43] Sensor configured at address: sensor.stackrox.svc:9998 [INFO 2023/05/13 12:20:43] Attempting to connect to Sensor [INFO 2023/05/13 12:21:13] [INFO 2023/05/13 12:21:13] == Collector Startup Diagnostics: == [INFO 2023/05/13 12:21:13] Connected to Sensor? false [INFO 2023/05/13 12:21:13] Kernel driver candidates: [INFO 2023/05/13 12:21:13] ==================================== [INFO 2023/05/13 12:21:13] [FATAL 2023/05/13 12:21:13] Unable to connect to Sensor. This error could mean that Sensor has not started correctly or that Collector configuration is incorrect. To fix this issue, you must verify Collector configuration to ensure that Sensor address is correct and that the Sensor pod is running correctly. View the Collector logs to specifically check the configured Sensor address. Alternatively, you can run the following command: USD kubectl -n stackrox get pod <collector_pod_name> -o jsonpath='{.spec.containers[0].env[?(@.name=="GRPC_SERVER")].value}' 1 1 For <collector_pod_name> , specify the name of your Collector pod, for example, collector-vclg5 . 2.2. Unavailability of the kernel driver Collector determines if it has a kernel driver for the node's kernel version. Collector first searches the local storage for a driver with the correct version and type, and then attempts to download the driver from Sensor. The following logs indicate that neither a local kernel driver nor a driver from Sensor is present: Collector Version: 3.15.0 OS: Alpine Linux v3.16 Kernel Version: 5.15.82-0-virt Starting StackRox Collector... [INFO 2023/05/30 12:00:33] Hostname: 'alpine' [INFO 2023/05/30 12:00:33] User configured collection-method=ebpf [INFO 2023/05/30 12:00:33] Afterglow is enabled [INFO 2023/05/30 12:00:33] Sensor configured at address: sensor.stackrox.svc:443 [INFO 2023/05/30 12:00:33] Attempting to connect to Sensor [INFO 2023/05/30 12:00:33] Successfully connected to Sensor. 
[INFO 2023/05/30 12:00:33] Module version: 2.5.0-rc1 [INFO 2023/05/30 12:00:33] Config: collection_method:0, useChiselCache:1, scrape_interval:30, turn_off_scrape:0, hostname:alpine, processesListeningOnPorts:1, logLevel:INFO [INFO 2023/05/30 12:00:33] Attempting to find eBPF probe - Candidate versions: [INFO 2023/05/30 12:00:33] collector-ebpf-5.15.82-0-virt.o [INFO 2023/05/30 12:00:33] Attempting to download collector-ebpf-5.15.82-0-virt.o [INFO 2023/05/30 12:00:33] Attempting to download kernel object from https://sensor.stackrox.svc:443/kernel-objects/2.5.0/collector-ebpf-5.15.82-0-virt.o.gz 1 [INFO 2023/05/30 12:00:33] HTTP Request failed with error code 404 2 [WARNING 2023/05/30 12:02:03] Attempted to download collector-ebpf-5.15.82-0-virt.o.gz 90 time(s) [WARNING 2023/05/30 12:02:03] Failed to download from collector-ebpf-5.15.82-0-virt.o.gz [WARNING 2023/05/30 12:02:03] Unable to download kernel object collector-ebpf-5.15.82-0-virt.o to /module/collector-ebpf.o.gz [WARNING 2023/05/30 12:02:03] No suitable kernel object downloaded for collector-ebpf-5.15.82-0-virt.o [ERROR 2023/05/30 12:02:03] Failed to initialize collector kernel components. [INFO 2023/05/30 12:02:03] [INFO 2023/05/30 12:02:03] == Collector Startup Diagnostics: == [INFO 2023/05/30 12:02:03] Connected to Sensor? true [INFO 2023/05/30 12:02:03] Kernel driver candidates: [INFO 2023/05/30 12:02:03] collector-ebpf-5.15.82-0-virt.o (unavailable) [INFO 2023/05/30 12:02:03] ==================================== [INFO 2023/05/30 12:02:03] [FATAL 2023/05/30 12:02:03] Failed to initialize collector kernel components. 3 1 The logs display attempts to locate the module first, followed by any efforts to download the driver from Sensor. 2 The 404 errors indicate that the node's kernel does not have a kernel driver. 3 As a result of missing a driver, Collector enters the CrashLoopBackOff state. The Kernel versions file contains a list of all supported kernel versions. 2.3. Failing to load the kernel driver Before Collector starts, it loads the kernel driver. However, in rare cases, you might encounter issues where Collector cannot load the kernel driver, resulting in various error messages or exceptions. In such cases, you must check the logs to identify the problems with failure in loading the kernel driver. Consider the following Collector log: [INFO 2023/05/13 14:25:13] Hostname: 'hostname' [...] [INFO 2023/05/13 14:25:13] Successfully downloaded and decompressed /module/collector.o [INFO 2023/05/13 14:25:13] [INFO 2023/05/13 14:25:13] This product uses ebpf subcomponents licensed under the GNU [INFO 2023/05/13 14:25:13] GENERAL PURPOSE LICENSE Version 2 outlined in the /kernel-modules/LICENSE file. [INFO 2023/05/13 14:25:13] Source code for the ebpf subcomponents is available at [INFO 2023/05/13 14:25:13] https://github.com/stackrox/falcosecurity-libs/ [INFO 2023/05/13 14:25:13] -- BEGIN PROG LOAD LOG -- [...] -- END PROG LOAD LOG -- [WARNING 2023/05/13 14:25:13] libscap: bpf_load_program() event=tracepoint/syscalls/sys_enter_chdir: Operation not permitted [ERROR 2023/05/13 14:25:13] Failed to setup collector-ebpf-6.2.0-20-generic.o [ERROR 2023/05/13 14:25:13] Failed to initialize collector kernel components. [INFO 2023/05/13 14:25:13] [INFO 2023/05/13 14:25:13] == Collector Startup Diagnostics: == [INFO 2023/05/13 14:25:13] Connected to Sensor? 
true [INFO 2023/05/13 14:25:13] Kernel driver candidates: [INFO 2023/05/13 14:25:13] collector-ebpf-6.2.0-20-generic.o (available) [INFO 2023/05/13 14:25:13] ==================================== [INFO 2023/05/13 14:25:13] [FATAL 2023/05/13 14:25:13] Failed to initialize collector kernel components. If you encounter this kind of error, it is unlikely that you can fix it yourself. Instead, report it to the Red Hat Advanced Cluster Security for Kubernetes (RHACS) support team or create a GitHub issue. | [
"[INFO 2022/11/28 13:21:55] == Collector Startup Diagnostics: == [INFO 2022/11/28 13:21:55] Connected to Sensor? true [INFO 2022/11/28 13:21:55] Kernel driver available? true [INFO 2022/11/28 13:21:55] Driver loaded into kernel? true [INFO 2022/11/28 13:21:55] ====================================",
"Collector Version: 3.15.0 OS: Ubuntu 20.04.4 LTS Kernel Version: 5.4.0-126-generic Starting StackRox Collector [INFO 2023/05/13 12:20:43] Hostname: 'hostname' [...] [INFO 2023/05/13 12:20:43] Sensor configured at address: sensor.stackrox.svc:9998 [INFO 2023/05/13 12:20:43] Attempting to connect to Sensor [INFO 2023/05/13 12:21:13] [INFO 2023/05/13 12:21:13] == Collector Startup Diagnostics: == [INFO 2023/05/13 12:21:13] Connected to Sensor? false [INFO 2023/05/13 12:21:13] Kernel driver candidates: [INFO 2023/05/13 12:21:13] ==================================== [INFO 2023/05/13 12:21:13] [FATAL 2023/05/13 12:21:13] Unable to connect to Sensor.",
"kubectl -n stackrox get pod <collector_pod_name> -o jsonpath='{.spec.containers[0].env[?(@.name==\"GRPC_SERVER\")].value}' 1",
"Collector Version: 3.15.0 OS: Alpine Linux v3.16 Kernel Version: 5.15.82-0-virt Starting StackRox Collector [INFO 2023/05/30 12:00:33] Hostname: 'alpine' [INFO 2023/05/30 12:00:33] User configured collection-method=ebpf [INFO 2023/05/30 12:00:33] Afterglow is enabled [INFO 2023/05/30 12:00:33] Sensor configured at address: sensor.stackrox.svc:443 [INFO 2023/05/30 12:00:33] Attempting to connect to Sensor [INFO 2023/05/30 12:00:33] Successfully connected to Sensor. [INFO 2023/05/30 12:00:33] Module version: 2.5.0-rc1 [INFO 2023/05/30 12:00:33] Config: collection_method:0, useChiselCache:1, scrape_interval:30, turn_off_scrape:0, hostname:alpine, processesListeningOnPorts:1, logLevel:INFO [INFO 2023/05/30 12:00:33] Attempting to find eBPF probe - Candidate versions: [INFO 2023/05/30 12:00:33] collector-ebpf-5.15.82-0-virt.o [INFO 2023/05/30 12:00:33] Attempting to download collector-ebpf-5.15.82-0-virt.o [INFO 2023/05/30 12:00:33] Attempting to download kernel object from https://sensor.stackrox.svc:443/kernel-objects/2.5.0/collector-ebpf-5.15.82-0-virt.o.gz 1 [INFO 2023/05/30 12:00:33] HTTP Request failed with error code 404 2 [WARNING 2023/05/30 12:02:03] Attempted to download collector-ebpf-5.15.82-0-virt.o.gz 90 time(s) [WARNING 2023/05/30 12:02:03] Failed to download from collector-ebpf-5.15.82-0-virt.o.gz [WARNING 2023/05/30 12:02:03] Unable to download kernel object collector-ebpf-5.15.82-0-virt.o to /module/collector-ebpf.o.gz [WARNING 2023/05/30 12:02:03] No suitable kernel object downloaded for collector-ebpf-5.15.82-0-virt.o [ERROR 2023/05/30 12:02:03] Failed to initialize collector kernel components. [INFO 2023/05/30 12:02:03] [INFO 2023/05/30 12:02:03] == Collector Startup Diagnostics: == [INFO 2023/05/30 12:02:03] Connected to Sensor? true [INFO 2023/05/30 12:02:03] Kernel driver candidates: [INFO 2023/05/30 12:02:03] collector-ebpf-5.15.82-0-virt.o (unavailable) [INFO 2023/05/30 12:02:03] ==================================== [INFO 2023/05/30 12:02:03] [FATAL 2023/05/30 12:02:03] Failed to initialize collector kernel components. 3",
"[INFO 2023/05/13 14:25:13] Hostname: 'hostname' [...] [INFO 2023/05/13 14:25:13] Successfully downloaded and decompressed /module/collector.o [INFO 2023/05/13 14:25:13] [INFO 2023/05/13 14:25:13] This product uses ebpf subcomponents licensed under the GNU [INFO 2023/05/13 14:25:13] GENERAL PURPOSE LICENSE Version 2 outlined in the /kernel-modules/LICENSE file. [INFO 2023/05/13 14:25:13] Source code for the ebpf subcomponents is available at [INFO 2023/05/13 14:25:13] https://github.com/stackrox/falcosecurity-libs/ [INFO 2023/05/13 14:25:13] -- BEGIN PROG LOAD LOG -- [...] -- END PROG LOAD LOG -- [WARNING 2023/05/13 14:25:13] libscap: bpf_load_program() event=tracepoint/syscalls/sys_enter_chdir: Operation not permitted [ERROR 2023/05/13 14:25:13] Failed to setup collector-ebpf-6.2.0-20-generic.o [ERROR 2023/05/13 14:25:13] Failed to initialize collector kernel components. [INFO 2023/05/13 14:25:13] [INFO 2023/05/13 14:25:13] == Collector Startup Diagnostics: == [INFO 2023/05/13 14:25:13] Connected to Sensor? true [INFO 2023/05/13 14:25:13] Kernel driver candidates: [INFO 2023/05/13 14:25:13] collector-ebpf-6.2.0-20-generic.o (available) [INFO 2023/05/13 14:25:13] ==================================== [INFO 2023/05/13 14:25:13] [FATAL 2023/05/13 14:25:13] Failed to initialize collector kernel components."
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/troubleshooting_collector/commonly-occurring-error-conditions |
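For the "Unable to connect to Sensor" case in section 2.1, it is often quickest to confirm that Sensor itself is healthy and that its service matches the address shown in the Collector logs before investigating further. The commands below are a minimal sketch and assume the default stackrox namespace and the usual sensor service and deployment names; adjust them if your installation differs.

# Confirm the Sensor pod is running and ready
kubectl -n stackrox get pods

# Confirm the service behind the configured address (for example, sensor.stackrox.svc)
kubectl -n stackrox get svc sensor

# Review recent Sensor logs for startup or certificate errors
kubectl -n stackrox logs deploy/sensor --tail=50

# Compare with the Sensor address configured on a Collector pod
kubectl -n stackrox get pod <collector_pod_name> \
  -o jsonpath='{.spec.containers[0].env[?(@.name=="GRPC_SERVER")].value}'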
Chapter 5. Installing a cluster with RHEL KVM on IBM Z and IBM LinuxONE in a restricted network | Chapter 5. Installing a cluster with RHEL KVM on IBM Z and IBM LinuxONE in a restricted network In OpenShift Container Platform version 4.14, you can install a cluster on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision in a restricted network. Note While this document refers to only IBM Z(R), all information in it also applies to IBM(R) LinuxONE. Important Additional considerations exist for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster. 5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. You must move or remove any existing installation files, before you begin the installation process. This ensures that the required installation files are created and updated during the installation process. Important Ensure that installation steps are done from a machine with access to the installation media. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. You provisioned a RHEL Kernel Virtual Machine (KVM) system that is hosted on the logical partition (LPAR) and based on RHEL 8.6 or later. See Red Hat Enterprise Linux 8 and 9 Life Cycle . 5.2. About installations in restricted networks In OpenShift Container Platform 4.14, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 5.2.1. 
Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 5.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 5.4. Machine requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. One or more KVM host machines based on RHEL 8.6 or later. Each RHEL KVM host machine must have libvirt installed and running. The virtual machines are provisioned under each RHEL KVM host machine. 5.4.1. Required machines The smallest OpenShift Container Platform clusters require the following hosts: Table 5.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To improve high availability of your cluster, distribute the control plane machines over different RHEL instances on at least two physical machines. The bootstrap, control plane, and compute machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. See Red Hat Enterprise Linux technology capabilities and limits . 5.4.2. Network connectivity requirements The OpenShift Container Platform installer creates the Ignition files, which are necessary for all the Red Hat Enterprise Linux CoreOS (RHCOS) virtual machines. The automated installation of OpenShift Container Platform is performed by the bootstrap machine. It starts the installation of OpenShift Container Platform on each node, starts the Kubernetes cluster, and then finishes. During this bootstrap, the virtual machine must have an established network connection either through a Dynamic Host Configuration Protocol (DHCP) server or static IP address. 5.4.3. IBM Z network connectivity requirements To install on IBM Z(R) under RHEL KVM, you need: A RHEL KVM host configured with an OSA or RoCE network adapter. Either a RHEL KVM host that is configured to use bridged networking in libvirt or MacVTap to connect the network to the guests. See Types of virtual network connections . 5.4.4. Host machine resource requirements The RHEL KVM host in your environment must meet the following requirements to host the virtual machines that you plan for the OpenShift Container Platform environment. See Getting started with virtualization . 
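Before checking the hardware and operating system requirements that follow, it can save time to verify that each RHEL KVM host really does have virtualization support and a running libvirt daemon. This is a minimal sketch using standard RHEL tooling and is not part of the documented installation procedure.

# Check hardware virtualization support and overall host readiness
virt-host-validate qemu

# Confirm the KVM kernel module is loaded and libvirt is active
lsmod | grep kvm
systemctl is-active libvirtd

# List guest virtual machines already defined on this host
virsh list --all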
You can install OpenShift Container Platform version 4.14 on the following IBM(R) hardware: IBM(R) z16 (all models), IBM(R) z15 (all models), IBM(R) z14 (all models) IBM(R) LinuxONE 4 (all models), IBM(R) LinuxONE III (all models), IBM(R) LinuxONE Emperor II, IBM(R) LinuxONE Rockhopper II 5.4.5. Minimum IBM Z system environment Hardware requirements The equivalent of six Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster. At least one network connection to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Note You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Z(R). However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every OpenShift Container Platform cluster. Important Since the overall performance of the cluster can be impacted, the LPARs that are used to set up the OpenShift Container Platform clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role. Operating system requirements One LPAR running on RHEL 8.6 or later with KVM, which is managed by libvirt On your RHEL KVM host, set up: Three guest virtual machines for OpenShift Container Platform control plane machines Two guest virtual machines for OpenShift Container Platform compute machines One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine 5.4.6. Minimum resource requirements Each cluster virtual machine must meet the following minimum requirements: Virtual Machine Operating System vCPU [1] Virtual RAM Storage IOPS Bootstrap RHCOS 4 16 GB 100 GB N/A Control plane RHCOS 4 16 GB 100 GB N/A Compute RHCOS 2 8 GB 100 GB N/A One physical core (IFL) provides two logical cores (threads) when SMT-2 is enabled. The hypervisor can provide two or more vCPUs. 5.4.7. Preferred IBM Z system environment Hardware requirements Three LPARS that each have the equivalent of six IFLs, which are SMT2 enabled, for each cluster. Two network connections to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Operating system requirements For high availability, two or three LPARs running on RHEL 8.6 or later with KVM, which are managed by libvirt. On your RHEL KVM host, set up: Three guest virtual machines for OpenShift Container Platform control plane machines, distributed across the RHEL KVM host machines. At least six guest virtual machines for OpenShift Container Platform compute machines, distributed across the RHEL KVM host machines. One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine. To ensure the availability of integral components in an overcommitted environment, increase the priority of the control plane by using cpu_shares . Do the same for infrastructure nodes, if they exist. See schedinfo in IBM(R) Documentation. 5.4.8. Preferred resource requirements The preferred requirements for each cluster virtual machine are: Virtual Machine Operating System vCPU Virtual RAM Storage Bootstrap RHCOS 4 16 GB 120 GB Control plane RHCOS 8 16 GB 120 GB Compute RHCOS 6 8 GB 120 GB 5.4.9. 
Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. Additional resources Recommended host practices for IBM Z(R) & IBM(R) LinuxONE environments 5.4.10. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 5.4.10.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 5.4.10.2. 
Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Table 5.2. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 5.3. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 5.4. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 5.4.11. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 5.5. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. 
api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 5.4.11.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 5.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 
2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 5.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 5.4.12. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. 
Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 5.6. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 5.7. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 5.4.12.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 
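After a load balancer configuration such as the following example is in place and at least one API server instance is serving traffic behind it, you can spot-check the health endpoint that the API listener probes. The following is a minimal sketch rather than part of the formal procedure: it assumes that curl is installed on a client that can reach the load balancer, it reuses the example cluster name from this section, and the -k flag is needed only because the cluster certificate authority is not yet trusted on the client:
$ curl -k https://api.ocp4.example.com:6443/readyz
Example output
ok
A response of ok confirms that the load balancer is passing TCP traffic through to a healthy API server instance.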
Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 5.3. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 5.5. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. 
This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Choose to perform either a fast track installation of Red Hat Enterprise Linux CoreOS (RHCOS) or a full installation of Red Hat Enterprise Linux CoreOS (RHCOS). For the full installation, you must set up an HTTP or HTTPS server to provide Ignition files and install images to the cluster nodes. For the fast track installation an HTTP or HTTPS server is not required, however, a DHCP server is required. See sections "Fast-track installation: Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines" and "Full installation: Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines". Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. 
Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 5.6. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. 
Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 5.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. 
Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.8. Manually creating the installation configuration file Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Z(R) 5.8.1. Sample install-config.yaml file for IBM Z You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. 
apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not available on your OpenShift Container Platform nodes, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether on your OpenShift Container Platform nodes or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. 
For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Z(R) infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 17 Add the additionalTrustBundle parameter and value. The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry. 18 Provide the imageContentSources section according to the output of the command that you used to mirror the repository. Important When using the oc adm release mirror command, use the output from the imageContentSources section. When using oc mirror command, use the repositoryDigestMirrors section of the ImageContentSourcePolicy file that results from running the command. ImageContentSourcePolicy is deprecated. For more information see Configuring image registry repository mirroring . 5.8.2. 
Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. 
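After the installation completes, you can confirm which proxy settings are in effect by inspecting the cluster-wide Proxy object directly. The following is a minimal sketch, assuming that the oc client is installed and that your kubeconfig points at the new cluster:
$ oc get proxy/cluster -o yaml
The spec section reflects the httpProxy , httpsProxy , and noProxy values from the install-config.yaml file, and the status section shows the values that the cluster is actually using.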
If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.8.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a minimal three node cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. Note The preferred resource for control plane nodes is six vCPUs and 21 GB. For three control plane nodes this is the memory + vCPU equivalent of a minimum five-node cluster. You should back the three nodes, each installed on a 120 GB disk, with three IFLs that are SMT2 enabled. The minimum tested setup is three vCPUs and 10 GB on a 120 GB disk for each control plane node. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 5.9. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 5.9.1. 
Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 5.8. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 5.9. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. You can change this value by migrating from OpenShift SDN to OVN-Kubernetes. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN network plugin. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OpenShift SDN network plugin The following table describes the configuration fields for the OpenShift SDN network plugin: Table 5.10. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. 
For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 5.11. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 5.12. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. 
You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 5.13. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 5.14. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 5.15. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14, the default is Global . 
ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 5.16. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 5.17. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 5.18. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 5.10. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 
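If you want to confirm how much time remains before the embedded certificates expire, you can decode one of the certificates from a generated Ignition config file. The following is a minimal sketch, not part of the formal procedure: it assumes that jq , base64 , and openssl are installed, that the Ignition files have already been created by the procedure below, and that the certificate contents are stored as base64-encoded data URLs, which is the typical encoding; adjust the sed expression if your files are encoded differently:
$ jq -r '[.storage.files[] | select(.path | endswith(".crt"))][0].contents.source' \
    <installation_directory>/bootstrap.ign \
    | sed 's/^data:[^,]*,//' \
    | base64 -d \
    | openssl x509 -noout -enddate
The command prints the notAfter date of the first embedded certificate. Keep in mind that some embedded certificates, such as certificate authorities, are longer-lived than the 24-hour installation certificates, so check more than one file path if you need a precise picture.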
Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 5.11. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Z(R) infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) as Red Hat Enterprise Linux (RHEL) guest virtual machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. You can perform a fast-track installation of RHCOS that uses a prepackaged QEMU copy-on-write (QCOW2) disk image. Alternatively, you can perform a full installation on a new QCOW2 disk image. To add further security to your system, you can optionally install RHCOS using IBM(R) Secure Execution before proceeding to the fast-track installation. 5.11.1. Installing RHCOS using IBM Secure Execution Before you install RHCOS using IBM(R) Secure Execution, you must prepare the underlying infrastructure. Prerequisites IBM(R) z15 or later, or IBM(R) LinuxONE III or later. Red Hat Enterprise Linux (RHEL) 8 or later. You have a bootstrap Ignition file. The file is not protected, enabling others to view and edit it. 
You have verified that the boot image has not been altered after installation. You must run all your nodes as IBM(R) Secure Execution guests. Procedure Prepare your RHEL KVM host to support IBM(R) Secure Execution. By default, KVM hosts do not support guests in IBM(R) Secure Execution mode. To support guests in IBM(R) Secure Execution mode, KVM hosts must boot in LPAR mode with the kernel parameter specification prot_virt=1 . To enable prot_virt=1 on RHEL 8, follow these steps: Navigate to /boot/loader/entries/ to modify your bootloader configuration file *.conf . Add the kernel command line parameter prot_virt=1 . Run the zipl command and reboot your system. KVM hosts that successfully start with support for IBM(R) Secure Execution for Linux issue the following kernel message: prot_virt: Reserving <amount>MB as ultravisor base storage. To verify that the KVM host now supports IBM(R) Secure Execution, run the following command: # cat /sys/firmware/uv/prot_virt_host Example output 1 The value of this attribute is 1 for Linux instances that detect their environment as consistent with that of a secure host. For other instances, the value is 0. Add your host keys to the KVM guest via Ignition. During the first boot, RHCOS looks for your host keys to re-encrypt itself with them. RHCOS searches for files starting with ibm-z-hostkey- in the /etc/se-hostkeys directory. All host keys, for each machine the cluster is running on, must be loaded into the directory by the administrator. After first boot, you cannot run the VM on any other machines. Note You need to prepare your Ignition file on a safe system. For example, another IBM(R) Secure Execution guest. For example: { "ignition": { "version": "3.0.0" }, "storage": { "files": [ { "path": "/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt", "contents": { "source": "data:;base64,<base64 encoded hostkey document>" }, "mode": 420 }, { "path": "/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt", "contents": { "source": "data:;base64,<base64 encoded hostkey document>" }, "mode": 420 } ] } } ``` Note You can add as many host keys as required if you want your node to be able to run on multiple IBM Z(R) machines. To generate the Base64 encoded string, run the following command: base64 <your-hostkey>.crt Compared to guests not running IBM(R) Secure Execution, the first boot of the machine is longer because the entire image is encrypted with a randomly generated LUKS passphrase before the Ignition phase. Add Ignition protection To protect the secrets that are stored in the Ignition config file from being read or even modified, you must encrypt the Ignition config file. Note To achieve the desired security, Ignition logging and local login are disabled by default when running IBM(R) Secure Execution. Fetch the public GPG key for the secex-qemu.qcow2 image and encrypt the Ignition config with the key by running the following command: gpg --recipient-file /path/to/ignition.gpg.pub --yes --output /path/to/config.ign.gpg --verbose --armor --encrypt /path/to/config.ign Follow the fast-track installation of RHCOS to install nodes by using the IBM(R) Secure Execution QCOW image. Note Before you start the VM, replace serial=ignition with serial=ignition_crypted , and add the launchSecurity parameter. Verification When you have completed the fast-track installation of RHCOS and Ignition runs at the first boot, verify if decryption is successful. 
If the decryption is successful, you can expect an output similar to the following example: Example output [ 2.801433] systemd[1]: Starting coreos-ignition-setup-user.service - CoreOS Ignition User Config Setup... [ 2.803959] coreos-secex-ignition-decrypt[731]: gpg: key <key_name>: public key "Secure Execution (secex) 38.20230323.dev.0" imported [ 2.808874] coreos-secex-ignition-decrypt[740]: gpg: encrypted with rsa4096 key, ID <key_name>, created <yyyy-mm-dd> [ OK ] Finished coreos-secex-igni...S Secex Ignition Config Decryptor. If the decryption fails, you can expect an output similar to the following example: Example output Starting coreos-ignition-s...reOS Ignition User Config Setup... [ 2.863675] coreos-secex-ignition-decrypt[729]: gpg: key <key_name>: public key "Secure Execution (secex) 38.20230323.dev.0" imported [ 2.869178] coreos-secex-ignition-decrypt[738]: gpg: encrypted with RSA key, ID <key_name> [ 2.870347] coreos-secex-ignition-decrypt[738]: gpg: public key decryption failed: No secret key [ 2.870371] coreos-secex-ignition-decrypt[738]: gpg: decryption failed: No secret key Additional resources Introducing IBM(R) Secure Execution for Linux Linux as an IBM(R) Secure Execution host or guest Setting up IBM(R) Secure Execution on IBM Z 5.11.2. Configuring NBDE with static IP in an IBM Z or IBM LinuxONE environment Enabling NBDE disk encryption in an IBM Z(R) or IBM(R) LinuxONE environment requires additional steps, which are described in detail in this section. Prerequisites You have set up the External Tang Server. See Network-bound disk encryption for instructions. You have installed the butane utility. You have reviewed the instructions for how to create machine configs with Butane. Procedure Create Butane configuration files for the control plane and compute nodes. The following example of a Butane configuration for a control plane node creates a file named master-storage.bu for disk encryption: variant: openshift version: 4.14.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 2 1 The cipher option is only required if FIPS mode is enabled. Omit the entry if FIPS is disabled. 2 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Create a customized initramfs file to boot the machine, by running the following command: USD coreos-installer pxe customize \ /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img \ --dest-device /dev/disk/by-id/scsi-<serial-number> --dest-karg-append \ ip=<ip-address>::<gateway-ip>:<subnet-mask>::<network-device>:none \ --dest-karg-append nameserver=<nameserver-ip> \ --dest-karg-append rd.neednet=1 -o \ /root/rhcos-bootfiles/<Node-name>-initramfs.s390x.img Note Before first boot, you must customize the initramfs for each node in the cluster, and add PXE kernel parameters. Create a parameter file that includes ignition.platform.id=metal and ignition.firstboot . 
Example kernel parameter file for the control plane machine: rd.neednet=1 \ console=ttysclp0 \ ignition.firstboot ignition.platform.id=metal \ coreos.live.rootfs_url=http://10.19.17.25/redhat/ocp/rhcos-413.86.202302201445-0/rhcos-413.86.202302201445-0-live-rootfs.s390x.img \ coreos.inst.ignition_url=http://bastion.ocp-cluster1.example.com:8080/ignition/master.ign \ ip=10.19.17.2::10.19.17.1:255.255.255.0::enbdd0:none nameserver=10.19.17.1 \ zfcp.allow_lun_scan=0 \ rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \ rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 Note Write all options in the parameter file as a single line and make sure you have no newline characters. Additional resources Creating machine configs with Butane 5.11.3. Fast-track installation by using a prepackaged QCOW2 disk image Complete the following steps to create the machines in a fast-track installation of Red Hat Enterprise Linux CoreOS (RHCOS), importing a prepackaged Red Hat Enterprise Linux CoreOS (RHCOS) QEMU copy-on-write (QCOW2) disk image. Prerequisites At least one LPAR running on RHEL 8.6 or later with KVM, referred to as RHEL KVM host in this procedure. The KVM/QEMU hypervisor is installed on the RHEL KVM host. A domain name server (DNS) that can perform hostname and reverse lookup for the nodes. A DHCP server that provides IP addresses. Procedure Obtain the RHEL QEMU copy-on-write (QCOW2) disk image file from the Product Downloads page on the Red Hat Customer Portal or from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate RHCOS QCOW2 image described in the following procedure. Download the QCOW2 disk image and Ignition files to a common directory on the RHEL KVM host. For example: /var/lib/libvirt/images Note The Ignition files are generated by the OpenShift Container Platform installer. Create a new disk image with the QCOW2 disk image backing file for each KVM guest node. USD qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size} Create the new KVM guest nodes using the Ignition file and the new disk image. USD virt-install --noautoconsole \ --connect qemu:///system \ --name {vm_name} \ --memory {memory} \ --vcpus {vcpus} \ --disk {disk} \ --launchSecurity type="s390-pv" \ 1 --import \ --network network={network},mac={mac} \ --disk path={ign_file},format=raw,readonly=on,serial=ignition,startup_policy=optional 2 1 If IBM(R) Secure Execution is enabled, add the launchSecurity type="s390-pv" parameter. 2 If IBM(R) Secure Execution is enabled, replace serial=ignition with serial=ignition_crypted . 5.11.4. Full installation on a new QCOW2 disk image Complete the following steps to create the machines in a full installation on a new QEMU copy-on-write (QCOW2) disk image. Prerequisites At least one LPAR running on RHEL 8.6 or later with KVM, referred to as RHEL KVM host in this procedure. The KVM/QEMU hypervisor is installed on the RHEL KVM host. A domain name server (DNS) that can perform hostname and reverse lookup for the nodes. An HTTP or HTTPS server is set up. Procedure Obtain the RHEL kernel, initramfs, and rootfs files from the Product Downloads page on the Red Hat Customer Portal or from the RHCOS image mirror page. 
Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate RHCOS QCOW2 image described in the following procedure. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel: rhcos-<version>-live-kernel-<architecture> initramfs: rhcos-<version>-live-initramfs.<architecture>.img rootfs: rhcos-<version>-live-rootfs.<architecture>.img Move the downloaded RHEL live kernel, initramfs, and rootfs as well as the Ignition files to an HTTP or HTTPS server before you launch virt-install . Note The Ignition files are generated by the OpenShift Container Platform installer. Create the new KVM guest nodes using the RHEL kernel, initramfs, and Ignition files, the new disk image, and adjusted parm line arguments. For --location , specify the location of the kernel/initrd on the HTTP or HTTPS server. For coreos.inst.ignition_url= , specify the Ignition file for the machine role. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. For coreos.live.rootfs_url= , specify the matching rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. USD virt-install \ --connect qemu:///system \ --name {vm_name} \ --vcpus {vcpus} \ --memory {memory_mb} \ --disk {vm_name}.qcow2,size={image_size| default(10,true)} \ --network network={virt_network_parm} \ --boot hd \ --location {media_location},kernel={rhcos_kernel},initrd={rhcos_initrd} \ --extra-args "rd.neednet=1 coreos.inst.install_dev=/dev/vda coreos.live.rootfs_url={rhcos_liveos} ip={ip}::{default_gateway}:{subnet_mask_length}:{vm_name}:enc1:none:{MTU} nameserver={dns} coreos.inst.ignition_url={rhcos_ign}" \ --noautoconsole \ --wait 5.11.5. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 5.11.5.1. Networking options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking on your RHCOS nodes for ISO installations. The examples describe how to use the ip= and nameserver= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= and nameserver= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page. The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). 
If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 5.12. 
Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 5.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 5.14. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. 
Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. 
Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 5.15. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m Configure the Operators that are not available. 5.15.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 5.15.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. 
Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 5.15.2.1. Configuring registry storage for IBM Z As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Z(R). You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. The storage must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.14 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: USD oc edit configs.imageregistry/cluster Then, change the line managementState: Removed to managementState: Managed . 5.15.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 5.16. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration.
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the previous command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the previous command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. Register your cluster on the Cluster registration page. Additional resources How to generate SOSREPORT within OpenShift Container Platform version 4 nodes without SSH . 5.17. Next steps Customize your cluster . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster | [
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". βββ auth β βββ kubeadmin-password β βββ kubeconfig βββ bootstrap.ign βββ master.ign βββ metadata.json βββ worker.ign",
"prot_virt: Reserving <amount>MB as ultravisor base storage.",
"cat /sys/firmware/uv/prot_virt_host",
"1",
"{ \"ignition\": { \"version\": \"3.0.0\" }, \"storage\": { \"files\": [ { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 }, { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 } ] } } ```",
"base64 <your-hostkey>.crt",
"gpg --recipient-file /path/to/ignition.gpg.pub --yes --output /path/to/config.ign.gpg --verbose --armor --encrypt /path/to/config.ign",
"[ 2.801433] systemd[1]: Starting coreos-ignition-setup-user.service - CoreOS Ignition User Config Setup [ 2.803959] coreos-secex-ignition-decrypt[731]: gpg: key <key_name>: public key \"Secure Execution (secex) 38.20230323.dev.0\" imported [ 2.808874] coreos-secex-ignition-decrypt[740]: gpg: encrypted with rsa4096 key, ID <key_name>, created <yyyy-mm-dd> [ OK ] Finished coreos-secex-igni...S Secex Ignition Config Decryptor.",
"Starting coreos-ignition-s...reOS Ignition User Config Setup [ 2.863675] coreos-secex-ignition-decrypt[729]: gpg: key <key_name>: public key \"Secure Execution (secex) 38.20230323.dev.0\" imported [ 2.869178] coreos-secex-ignition-decrypt[738]: gpg: encrypted with RSA key, ID <key_name> [ 2.870347] coreos-secex-ignition-decrypt[738]: gpg: public key decryption failed: No secret key [ 2.870371] coreos-secex-ignition-decrypt[738]: gpg: decryption failed: No secret key",
"variant: openshift version: 4.14.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 2",
"coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial-number> --dest-karg-append ip=<ip-address>::<gateway-ip>:<subnet-mask>::<network-device>:none --dest-karg-append nameserver=<nameserver-ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<Node-name>-initramfs.s390x.img",
"rd.neednet=1 console=ttysclp0 ignition.firstboot ignition.platform.id=metal coreos.live.rootfs_url=http://10.19.17.25/redhat/ocp/rhcos-413.86.202302201445-0/rhcos-413.86.202302201445-0-live-rootfs.s390x.img coreos.inst.ignition_url=http://bastion.ocp-cluster1.example.com:8080/ignition/master.ign ip=10.19.17.2::10.19.17.1:255.255.255.0::enbdd0:none nameserver=10.19.17.1 zfcp.allow_lun_scan=0 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000",
"qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size}",
"virt-install --noautoconsole --connect qemu:///system --name {vm_name} --memory {memory} --vcpus {vcpus} --disk {disk} --launchSecurity type=\"s390-pv\" \\ 1 --import --network network={network},mac={mac} --disk path={ign_file},format=raw,readonly=on,serial=ignition,startup_policy=optional 2",
"virt-install --connect qemu:///system --name {vm_name} --vcpus {vcpus} --memory {memory_mb} --disk {vm_name}.qcow2,size={image_size| default(10,true)} --network network={virt_network_parm} --boot hd --location {media_location},kernel={rhcos_kernel},initrd={rhcos_initrd} --extra-args \"rd.neednet=1 coreos.inst.install_dev=/dev/vda coreos.live.rootfs_url={rhcos_liveos} ip={ip}::{default_gateway}:{subnet_mask_length}:{vm_name}:enc1:none:{MTU} nameserver={dns} coreos.inst.ignition_url={rhcos_ign}\" --noautoconsole --wait",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.14 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_ibm_z_and_ibm_linuxone/installing-restricted-networks-ibm-z-kvm |
Chapter 47. JMS | Chapter 47. JMS Both producer and consumer are supported. This component allows messages to be sent to (or consumed from) a JMS Queue or Topic. It uses Spring's JMS support for declarative transactions, including Spring's JmsTemplate for sending and a MessageListenerContainer for consuming.
47.1. Dependencies When using jms with the Red Hat build of Camel Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jms-starter</artifactId> </dependency>
Note Using ActiveMQ If you are using Apache ActiveMQ, you should prefer the ActiveMQ component as it has been optimized for ActiveMQ. All of the options and samples on this page are also valid for the ActiveMQ component.
Note Transacted and caching See the section Transactions and Cache Levels below if you are using transactions with JMS, as it can impact performance.
Note Request/Reply over JMS Make sure to read the section Request-reply over JMS further below on this page for important notes about request/reply, as Camel offers a number of options to configure for performance and clustered environments.
47.2. URI format jms:[queue:|topic:]destinationName[?options] Where destinationName is a JMS queue or topic name. By default, the destinationName is interpreted as a queue name. For example, to connect to the queue FOO.BAR, use: jms:FOO.BAR You can include the optional queue: prefix, if you prefer: jms:queue:FOO.BAR To connect to a topic, you must include the topic: prefix. For example, to connect to the topic Stocks.Prices, use: jms:topic:Stocks.Prices You append query options to the URI by using the following format, ?option=value&option=value&...
47.2.1. Using ActiveMQ The JMS component reuses Spring 2's JmsTemplate for sending messages. This is not ideal for use in a non-J2EE container and typically requires some caching in the JMS provider to avoid poor performance. If you intend to use Apache ActiveMQ as your message broker, the recommendation is that you do one of the following: use the ActiveMQ component, which is already optimized to use ActiveMQ efficiently, or use the PoolingConnectionFactory in ActiveMQ.
47.2.2. Transactions and Cache Levels If you are consuming messages and using transactions (transacted=true), then the default settings for cache level can impact performance. If you are using XA transactions then you cannot cache, as it can cause the XA transaction to not work properly. If you are not using XA, then you should consider caching as it speeds up performance, such as setting cacheLevelName=CACHE_CONSUMER. The default setting for cacheLevelName is CACHE_AUTO. This default auto-detects the mode and sets the cache level accordingly: CACHE_CONSUMER if transacted=false, or CACHE_NONE if transacted=true. In other words, the default setting is conservative. Consider using cacheLevelName=CACHE_CONSUMER if you are using non-XA transactions.
47.2.3. Durable Subscriptions If you wish to use durable topic subscriptions, you need to specify both clientId and durableSubscriptionName. The value of the clientId must be unique and can only be used by a single JMS connection instance in your entire network. You may prefer to use Virtual Topics instead to avoid this limitation. More background on durable messaging is available in the ActiveMQ documentation.
47.2.4. Message Header Mapping When using message headers, the JMS specification states that header names must be valid Java identifiers. So try to name your headers to be valid Java identifiers.
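Because the subsections above describe the endpoint options individually, a short sketch may help show how they combine. The following Java DSL example is illustrative only: the broker URL, queue and topic names, header names, and the ActiveMQConnectionFactory wiring are assumptions made for this sketch, not details taken from this chapter.
```java
import javax.jms.ConnectionFactory;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jms.JmsComponent;
import org.apache.camel.impl.DefaultCamelContext;

/**
 * Minimal sketch tying together the endpoint URIs, transacted consumption,
 * durable subscriptions, and header naming discussed above. Broker URL and
 * destination names are illustrative assumptions.
 */
public class JmsEndpointSketch {

    public static void main(String[] args) throws Exception {
        // Assumes an ActiveMQ broker on localhost and the activemq-client jar
        // on the classpath; any other javax.jms.ConnectionFactory works too.
        ConnectionFactory connectionFactory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");

        DefaultCamelContext context = new DefaultCamelContext();
        context.addComponent("jms", JmsComponent.jmsComponentAutoAcknowledge(connectionFactory));

        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Transacted consumer from a queue, with the cache level
                // recommended above for non-XA local transactions.
                from("jms:queue:FOO.BAR?transacted=true&cacheLevelName=CACHE_CONSUMER")
                    // A valid Java identifier passes through header mapping untouched.
                    .setHeader("orderId", constant("42"))
                    // A dotted name relies on the default key format strategy and is
                    // restored on the consuming side (see the strategy notes below).
                    .setHeader("my.header.name", constant("illustrative value"))
                    .to("jms:topic:Stocks.Prices");

                // Durable topic subscription: both clientId and
                // durableSubscriptionName must be set, as described in 47.2.3.
                from("jms:topic:Stocks.Prices?clientId=priceWatcher"
                        + "&durableSubscriptionName=priceWatcherSub")
                    .log("price update received: ${body}");
            }
        });

        context.start();
        // Keep the JVM alive briefly so the routes can exchange a few messages.
        Thread.sleep(5_000);
        context.stop();
    }
}
```
The dotted header is included only to show that the default key format strategy (described next) round-trips such names; plain identifiers such as orderId need no mapping at all.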
One benefit of doing this is that you can then use your headers inside a JMS Selector (whose SQL92 syntax mandates Java identifier syntax for headers). A simple strategy for mapping header names is used by default. The strategy is to replace any dots and hyphens in the header name as shown below, and to reverse the replacement when the header name is restored from a JMS message sent over the wire. What does this mean? No more losing method names to invoke on a bean component, no more losing the filename header for the File Component, and so on. The current header name strategy for accepting header names in Camel is as follows: dots are replaced by _DOT_ and the replacement is reversed when Camel consumes the message; hyphens are replaced by _HYPHEN_ and the replacement is reversed when Camel consumes the message. You can configure many different properties on the JMS endpoint, which map to properties on the JMSConfiguration object.
Note Mapping to Spring JMS Many of these properties map to properties on Spring JMS, which Camel uses for sending and receiving messages. So you can get more information about these properties by consulting the relevant Spring documentation.
47.3. Configuring Options Camel components are configured on two levels: the component level and the endpoint level.
47.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, URLs for network connections, and so on. Since components typically have pre-configured defaults for the most common cases, you may only need to configure a few component options, or maybe none at all. You can configure components with the Component DSL in a configuration file (application.properties|yaml), or directly with Java code.
47.3.2. Endpoint Level Options At the endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from), as a producer (to), or for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as type-safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for URLs, port numbers, sensitive information, and other settings. Placeholders allow you to externalize the configuration from your code, giving you more flexible and reusable code.
47.4. Component Options The JMS component supports 98 options, which are listed below. Name Description Default Type clientId (common) Sets the JMS client ID to use. Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions. If using Apache ActiveMQ you may prefer to use Virtual Topics instead. String connectionFactory (common) The connection factory to be used. A connection factory must be configured either on the component or endpoint. ConnectionFactory disableReplyTo (common) Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message.
You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route message from one system to another. false boolean durableSubscriptionName (common) The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured as well. String jmsMessageType (common) Allows you to force the use of a specific javax.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it. Enum values: Bytes Map Object Stream Text JmsMessageType replyTo (common) Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer). String testConnectionOnStartup (common) Specifies whether to test the connection on startup. This ensures that when Camel starts that all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers is tested as well. false boolean acknowledgementModeName (consumer) The JMS acknowledgement name, which is one of: SESSION_TRANSACTED, CLIENT_ACKNOWLEDGE, AUTO_ACKNOWLEDGE, DUPS_OK_ACKNOWLEDGE. Enum values: SESSION_TRANSACTED CLIENT_ACKNOWLEDGE AUTO_ACKNOWLEDGE DUPS_OK_ACKNOWLEDGE AUTO_ACKNOWLEDGE String artemisConsumerPriority (consumer) Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority. Messages will only going to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance because it does not meet the criteria of any selectors associated with the consumer). int asyncConsumer (consumer) Whether the JmsConsumer processes the Exchange asynchronously. If enabled then the JmsConsumer may pickup the message from the JMS queue, while the message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer will pickup the message from the JMS queue. Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as transaction must be executed synchronously (Camel 3.0 may support async transactions). false boolean autoStartup (consumer) Specifies whether the consumer container should auto-startup. true boolean cacheLevel (consumer) Sets the cache level by ID for the underlying JMS resources. See cacheLevelName option for more details. int cacheLevelName (consumer) Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE_AUTO, CACHE_CONNECTION, CACHE_CONSUMER, CACHE_NONE, and CACHE_SESSION. The default setting is CACHE_AUTO. See the Spring documentation and Transactions Cache Levels for more information. 
Enum values: CACHE_AUTO CACHE_CONNECTION CACHE_CONSUMER CACHE_NONE CACHE_SESSION CACHE_AUTO String concurrentConsumers (consumer) Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. 1 int maxConcurrentConsumers (consumer) Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. int replyToDeliveryPersistent (consumer) Specifies whether to use persistent delivery by default for replies. true boolean selector (consumer) Sets the JMS selector to use. String subscriptionDurable (consumer) Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. false boolean subscriptionName (consumer) Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client's JMS client id. Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0). String subscriptionShared (consumer) Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. Requires a JMS 2.0 compatible message broker. false boolean acceptMessagesWhileStopping (consumer (advanced)) Specifies whether the consumer accept messages while it is stopping. You may consider enabling this option, if you start and stop JMS routes at runtime, while there are still messages enqueued on the queue. If this option is false, and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved at a dead letter queue on the JMS broker. To avoid this its recommended to enable this option. 
false boolean allowReplyManagerQuickStop (consumer (advanced)) Whether the DefaultMessageListenerContainer used in the reply managers for request-reply messaging allow the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers but to enable for reply managers you must enable this flag. false boolean consumerType (consumer (advanced)) The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. Enum values: Simple Default Custom Default ConsumerType defaultTaskExecutorType (consumer (advanced)) Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. Possible values: SimpleAsync (uses Spring's SimpleAsyncTaskExecutor) or ThreadPool (uses Spring's ThreadPoolTaskExecutor with optimal values - cached threadpool-like). If not set, it defaults to the behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers. Enum values: ThreadPool SimpleAsync DefaultTaskExecutorType eagerLoadingOfProperties (consumer (advanced)) Enables eager loading of JMS properties and payload as soon as a message is loaded which generally is inefficient as the JMS properties may not be required but sometimes can catch early any issues with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody. false boolean eagerPoisonBody (consumer (advanced)) If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison are already stored as exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties. Poison JMS message due to USD\{exception.message} String exposeListenerSession (consumer (advanced)) Specifies whether the listener session should be exposed when consuming messages. false boolean replyToSameDestinationAllowed (consumer (advanced)) Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself. false boolean taskExecutor (consumer (advanced)) Allows you to specify a custom task executor for consuming messages. TaskExecutor deliveryDelay (producer) Sets delivery delay to use for send calls for JMS. This option requires JMS 2.0 compliant broker. -1 long deliveryMode (producer) Specifies the delivery mode to be used. Possible values are those defined by javax.jms.DeliveryMode. NON_PERSISTENT = 1 and PERSISTENT = 2. 
Enum values: 1 2 Integer deliveryPersistent (producer) Specifies whether persistent delivery is used by default. true boolean explicitQosEnabled (producer) Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring's JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers. false Boolean formatDateHeadersToIso8601 (producer) Sets whether JMS date properties should be formatted according to the ISO 8601 standard. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean preserveMessageQos (producer) Set to true, if you want to send message using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header. false boolean priority (producer) Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect. Enum values: 1 2 3 4 5 6 7 8 9 4 int replyToConcurrentConsumers (producer) Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. 1 int replyToMaxConcurrentConsumers (producer) Specifies the maximum number of concurrent consumers when using request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. int replyToOnTimeoutMaxConcurrentConsumers (producer) Specifies the maximum number of concurrent consumers for continue routing when timeout occurred when using request/reply over JMS. 1 int replyToOverride (producer) Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination. String replyToType (producer) Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. 
See Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues has lower performance than its alternatives Temporary and Exclusive. Enum values: Temporary Shared Exclusive ReplyToType requestTimeout (producer) The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option. 20000 long timeToLive (producer) When sending messages, specifies the time-to-live of the message (in milliseconds). -1 long allowAdditionalHeaders (producer (advanced)) This option is used to allow additional headers which may have values that are invalid according to JMS specification. For example some message systems such as WMQ do this with header names using prefix JMS_IBM_MQMD_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use as suffix for wildcard matching. String allowNullBody (producer (advanced)) Whether to allow sending messages with no body. If this option is false and the message body is null, then an JMSException is thrown. true boolean alwaysCopyMessage (producer (advanced)) If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set). false boolean correlationProperty (producer (advanced)) When using InOut exchange pattern use this JMS property instead of JMSCorrelationID JMS property to correlate messages. If set messages will be correlated solely on the value of this property JMSCorrelationID property will be ignored and not set by Camel. String disableTimeToLive (producer (advanced)) Use this option to force disabling time to live. For example when you do request/reply over JMS, then Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to archive. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See below in section About time to live for more details. false boolean forceSendOriginalMessage (producer (advanced)) When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received. false boolean includeSentJMSMessageID (producer (advanced)) Only applicable when sending to JMS destination using InOnly (eg fire and forget). Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination. false boolean replyToCacheLevelName (producer (advanced)) Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). 
Camel will by default use: CACHE_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require to set the replyToCacheLevelName=CACHE_NONE to work. Note: If using temporary queues then CACHE_NONE is not allowed, and you must use a higher value such as CACHE_CONSUMER or CACHE_SESSION. Enum values: CACHE_AUTO CACHE_CONNECTION CACHE_CONSUMER CACHE_NONE CACHE_SESSION String replyToDestinationSelectorName (producer (advanced)) Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue). String streamMessageTypeEnabled (producer (advanced)) Sets whether StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc will either by sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used which enforces the entire message payload to be read into memory. By enabling this option the message payload is read into memory in chunks and each chunk is then written to the StreamMessage until no more data. false boolean allowAutoWiredConnectionFactory (advanced) Whether to auto-discover ConnectionFactory from the registry, if no connection factory has been configured. If only one instance of ConnectionFactory is found then it will be used. This is enabled by default. true boolean allowAutoWiredDestinationResolver (advanced) Whether to auto-discover DestinationResolver from the registry, if no destination resolver has been configured. If only one instance of DestinationResolver is found then it will be used. This is enabled by default. true boolean allowSerializedHeaders (advanced) Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false boolean artemisStreamingEnabled (advanced) Whether optimizing for Apache Artemis streaming mode. This can reduce memory overhead when using Artemis with JMS StreamMessage types. This option must only be enabled if Apache Artemis is being used. false boolean asyncStartListener (advanced) Whether to startup the JmsConsumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or failover. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry. false boolean asyncStopListener (advanced) Whether to stop the JmsConsumer message listener asynchronously, when stopping a route. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true boolean configuration (advanced) To use a shared JMS configuration. JmsConfiguration destinationResolver (advanced) A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to lookup the real destination in a JNDI registry). DestinationResolver errorHandler (advanced) Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure logging level and whether stack traces should be logged using errorHandlerLoggingLevel and errorHandlerLogStackTrace options. This makes it much easier to configure, than having to code a custom errorHandler. ErrorHandler exceptionListener (advanced) Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions. ExceptionListener idleConsumerLimit (advanced) Specify the limit for the number of consumers that are allowed to be idle at any given time. 1 int idleTaskExecutionLimit (advanced) Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring. 1 int includeAllJMSXProperties (advanced) Whether to include all JMSXxxx properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply. false boolean jmsKeyFormatStrategy (advanced) Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation. Enum values: default passthrough JmsKeyFormatStrategy mapJmsMessage (advanced) Specifies whether Camel should auto map the received JMS message to a suited payload type, such as javax.jms.TextMessage to a String etc. true boolean maxMessagesPerTask (advanced) The number of messages per task. -1 is unlimited. If you use a range for concurrent consumers (eg min max), then this option can be used to set a value to eg 100 to control how fast the consumers will shrink when less work is required. -1 int messageConverter (advanced) To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control how to map to/from a javax.jms.Message. MessageConverter messageCreatedStrategy (advanced) To use the given MessageCreatedStrategy which are invoked when Camel creates new instances of javax.jms.Message objects when Camel is sending a JMS message. MessageCreatedStrategy messageIdEnabled (advanced) When sending, specifies whether message IDs should be added. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value. 
true boolean messageListenerContainerFactory (advanced) Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom. MessageListenerContainerFactory messageTimestampEnabled (advanced) Specifies whether timestamps should be enabled by default on sending messages. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint the timestamp must be set to its normal value. true boolean pubSubNoLocal (advanced) Specifies whether to inhibit the delivery of messages published by its own connection. false boolean queueBrowseStrategy (advanced) To use a custom QueueBrowseStrategy when browsing queues. QueueBrowseStrategy receiveTimeout (advanced) The timeout for receiving messages (in milliseconds). 1000 long recoveryInterval (advanced) Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds. 5000 long requestTimeoutCheckerInterval (advanced) Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout. 1000 long synchronous (advanced) Sets whether synchronous processing should be strictly used. false boolean transferException (advanced) If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be send back in response as a javax.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. Use this with caution as the data is using Java Object serialization and requires the received to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumer!. false boolean transferExchange (advanced) You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payloads is an Exchange and not a regular payload. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers having to use compatible Camel versions!. false boolean useMessageIDAsCorrelationID (advanced) Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages. 
false boolean waitForProvisionCorrelationToBeUpdatedCounter (advanced) Number of times to wait for provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled. 50 int waitForProvisionCorrelationToBeUpdatedThreadSleepingTime (advanced) Interval in millis to sleep each time while waiting for provisional correlation id to be updated. 100 long headerFilterStrategy (filter) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy errorHandlerLoggingLevel (logging) Allows to configure the default errorHandler logging level for logging uncaught exceptions. Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel errorHandlerLogStackTrace (logging) Allows to control whether stacktraces should be logged or not, by the default errorHandler. true boolean password (security) Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String username (security) Username to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String transacted (transaction) Specifies whether to use transacted mode. false boolean transactedInOut (transaction) Specifies whether InOut operations (request reply) default to using transacted mode If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: that within a JTA transaction, the parameters passed to createQueue, createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction. false boolean lazyCreateTransactionManager (transaction (advanced)) If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true. true boolean transactionManager (transaction (advanced)) The Spring transaction manager to use. PlatformTransactionManager transactionName (transaction (advanced)) The name of the transaction to use. String transactionTimeout (transaction (advanced)) The timeout value of the transaction (in seconds), if using transacted mode. -1 int 47.5. Endpoint Options The JMS endpoint is configured using URI syntax: with the following path and query parameters: 47.5.1. Path Parameters (2 parameters) Name Description Default Type destinationType (common) The kind of destination to use. Enum values: queue topic temp-queue temp-topic queue String destinationName (common) Required Name of the queue or topic to use as destination. String 47.5.2. Query Parameters (95 parameters) Name Description Default Type clientId (common) Sets the JMS client ID to use. 
Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions. If using Apache ActiveMQ you may prefer to use Virtual Topics instead. String connectionFactory (common) The connection factory to be use. A connection factory must be configured either on the component or endpoint. ConnectionFactory disableReplyTo (common) Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route message from one system to another. false boolean durableSubscriptionName (common) The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured as well. String jmsMessageType (common) Allows you to force the use of a specific javax.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it. Enum values: Bytes Map Object Stream Text JmsMessageType replyTo (common) Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer). String testConnectionOnStartup (common) Specifies whether to test the connection on startup. This ensures that when Camel starts that all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers is tested as well. false boolean acknowledgementModeName (consumer) The JMS acknowledgement name, which is one of: SESSION_TRANSACTED, CLIENT_ACKNOWLEDGE, AUTO_ACKNOWLEDGE, DUPS_OK_ACKNOWLEDGE. Enum values: SESSION_TRANSACTED CLIENT_ACKNOWLEDGE AUTO_ACKNOWLEDGE DUPS_OK_ACKNOWLEDGE AUTO_ACKNOWLEDGE String artemisConsumerPriority (consumer) Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority. Messages will only going to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance because it does not meet the criteria of any selectors associated with the consumer). int asyncConsumer (consumer) Whether the JmsConsumer processes the Exchange asynchronously. If enabled then the JmsConsumer may pickup the message from the JMS queue, while the message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer will pickup the message from the JMS queue. 
Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as transaction must be executed synchronously (Camel 3.0 may support async transactions). false boolean autoStartup (consumer) Specifies whether the consumer container should auto-startup. true boolean cacheLevel (consumer) Sets the cache level by ID for the underlying JMS resources. See cacheLevelName option for more details. int cacheLevelName (consumer) Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE_AUTO, CACHE_CONNECTION, CACHE_CONSUMER, CACHE_NONE, and CACHE_SESSION. The default setting is CACHE_AUTO. See the Spring documentation and Transactions Cache Levels for more information. Enum values: CACHE_AUTO CACHE_CONNECTION CACHE_CONSUMER CACHE_NONE CACHE_SESSION CACHE_AUTO String concurrentConsumers (consumer) Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. 1 int maxConcurrentConsumers (consumer) Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. int replyToDeliveryPersistent (consumer) Specifies whether to use persistent delivery by default for replies. true boolean selector (consumer) Sets the JMS selector to use. String subscriptionDurable (consumer) Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. false boolean subscriptionName (consumer) Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client's JMS client id. Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0). String subscriptionShared (consumer) Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. Requires a JMS 2.0 compatible message broker. 
false boolean acceptMessagesWhileStopping (consumer (advanced)) Specifies whether the consumer accept messages while it is stopping. You may consider enabling this option, if you start and stop JMS routes at runtime, while there are still messages enqueued on the queue. If this option is false, and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved at a dead letter queue on the JMS broker. To avoid this its recommended to enable this option. false boolean allowReplyManagerQuickStop (consumer (advanced)) Whether the DefaultMessageListenerContainer used in the reply managers for request-reply messaging allow the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers but to enable for reply managers you must enable this flag. false boolean consumerType (consumer (advanced)) The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. Enum values: Simple Default Custom Default ConsumerType defaultTaskExecutorType (consumer (advanced)) Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. Possible values: SimpleAsync (uses Spring's SimpleAsyncTaskExecutor) or ThreadPool (uses Spring's ThreadPoolTaskExecutor with optimal values - cached threadpool-like). If not set, it defaults to the behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers. Enum values: ThreadPool SimpleAsync DefaultTaskExecutorType eagerLoadingOfProperties (consumer (advanced)) Enables eager loading of JMS properties and payload as soon as a message is loaded which generally is inefficient as the JMS properties may not be required but sometimes can catch early any issues with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody. false boolean eagerPoisonBody (consumer (advanced)) If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison are already stored as exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties. Poison JMS message due to USD\{exception.message} String exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. 
ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern exposeListenerSession (consumer (advanced)) Specifies whether the listener session should be exposed when consuming messages. false boolean replyToSameDestinationAllowed (consumer (advanced)) Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself. false boolean taskExecutor (consumer (advanced)) Allows you to specify a custom task executor for consuming messages. TaskExecutor deliveryDelay (producer) Sets delivery delay to use for send calls for JMS. This option requires JMS 2.0 compliant broker. -1 long deliveryMode (producer) Specifies the delivery mode to be used. Possible values are those defined by javax.jms.DeliveryMode. NON_PERSISTENT = 1 and PERSISTENT = 2. Enum values: 1 2 Integer deliveryPersistent (producer) Specifies whether persistent delivery is used by default. true boolean explicitQosEnabled (producer) Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring's JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers. false Boolean formatDateHeadersToIso8601 (producer) Sets whether JMS date properties should be formatted according to the ISO 8601 standard. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean preserveMessageQos (producer) Set to true, if you want to send message using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header. false boolean priority (producer) Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect. Enum values: 1 2 3 4 5 6 7 8 9 4 int replyToConcurrentConsumers (producer) Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. 1 int replyToMaxConcurrentConsumers (producer) Specifies the maximum number of concurrent consumers when using request/reply over JMS. 
See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. int replyToOnTimeoutMaxConcurrentConsumers (producer) Specifies the maximum number of concurrent consumers for continue routing when timeout occurred when using request/reply over JMS. 1 int replyToOverride (producer) Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination. String replyToType (producer) Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. See Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues has lower performance than its alternatives Temporary and Exclusive. Enum values: Temporary Shared Exclusive ReplyToType requestTimeout (producer) The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option. 20000 long timeToLive (producer) When sending messages, specifies the time-to-live of the message (in milliseconds). -1 long allowAdditionalHeaders (producer (advanced)) This option is used to allow additional headers which may have values that are invalid according to JMS specification. For example some message systems such as WMQ do this with header names using prefix JMS_IBM_MQMD_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use as suffix for wildcard matching. String allowNullBody (producer (advanced)) Whether to allow sending messages with no body. If this option is false and the message body is null, then an JMSException is thrown. true boolean alwaysCopyMessage (producer (advanced)) If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set). false boolean correlationProperty (producer (advanced)) When using InOut exchange pattern use this JMS property instead of JMSCorrelationID JMS property to correlate messages. If set messages will be correlated solely on the value of this property JMSCorrelationID property will be ignored and not set by Camel. String disableTimeToLive (producer (advanced)) Use this option to force disabling time to live. For example when you do request/reply over JMS, then Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to archive. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. 
See below in section About time to live for more details. false boolean forceSendOriginalMessage (producer (advanced)) When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received. false boolean includeSentJMSMessageID (producer (advanced)) Only applicable when sending to JMS destination using InOnly (eg fire and forget). Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination. false boolean replyToCacheLevelName (producer (advanced)) Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require to set the replyToCacheLevelName=CACHE_NONE to work. Note: If using temporary queues then CACHE_NONE is not allowed, and you must use a higher value such as CACHE_CONSUMER or CACHE_SESSION. Enum values: CACHE_AUTO CACHE_CONNECTION CACHE_CONSUMER CACHE_NONE CACHE_SESSION String replyToDestinationSelectorName (producer (advanced)) Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue). String streamMessageTypeEnabled (producer (advanced)) Sets whether StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc will either by sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used which enforces the entire message payload to be read into memory. By enabling this option the message payload is read into memory in chunks and each chunk is then written to the StreamMessage until no more data. false boolean allowSerializedHeaders (advanced) Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false boolean artemisStreamingEnabled (advanced) Whether optimizing for Apache Artemis streaming mode. This can reduce memory overhead when using Artemis with JMS StreamMessage types. This option must only be enabled if Apache Artemis is being used. false boolean asyncStartListener (advanced) Whether to startup the JmsConsumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or failover. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry. false boolean asyncStopListener (advanced) Whether to stop the JmsConsumer message listener asynchronously, when stopping a route. 
false boolean destinationResolver (advanced) A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to lookup the real destination in a JNDI registry). DestinationResolver errorHandler (advanced) Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure logging level and whether stack traces should be logged using errorHandlerLoggingLevel and errorHandlerLogStackTrace options. This makes it much easier to configure, than having to code a custom errorHandler. ErrorHandler exceptionListener (advanced) Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions. ExceptionListener headerFilterStrategy (advanced) To use a custom HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy idleConsumerLimit (advanced) Specify the limit for the number of consumers that are allowed to be idle at any given time. 1 int idleTaskExecutionLimit (advanced) Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring. 1 int includeAllJMSXProperties (advanced) Whether to include all JMSXxxx properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply. false boolean jmsKeyFormatStrategy (advanced) Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation. Enum values: default passthrough JmsKeyFormatStrategy mapJmsMessage (advanced) Specifies whether Camel should auto map the received JMS message to a suited payload type, such as javax.jms.TextMessage to a String etc. true boolean maxMessagesPerTask (advanced) The number of messages per task. -1 is unlimited. If you use a range for concurrent consumers (eg min max), then this option can be used to set a value to eg 100 to control how fast the consumers will shrink when less work is required. -1 int messageConverter (advanced) To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control how to map to/from a javax.jms.Message. MessageConverter messageCreatedStrategy (advanced) To use the given MessageCreatedStrategy which are invoked when Camel creates new instances of javax.jms.Message objects when Camel is sending a JMS message. MessageCreatedStrategy messageIdEnabled (advanced) When sending, specifies whether message IDs should be added. This is just an hint to the JMS broker. 
If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value. true boolean messageListenerContainerFactory (advanced) Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom. MessageListenerContainerFactory messageTimestampEnabled (advanced) Specifies whether timestamps should be enabled by default on sending messages. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint the timestamp must be set to its normal value. true boolean pubSubNoLocal (advanced) Specifies whether to inhibit the delivery of messages published by its own connection. false boolean receiveTimeout (advanced) The timeout for receiving messages (in milliseconds). 1000 long recoveryInterval (advanced) Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds. 5000 long requestTimeoutCheckerInterval (advanced) Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout. 1000 long synchronous (advanced) Sets whether synchronous processing should be strictly used. false boolean transferException (advanced) If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be send back in response as a javax.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. Use this with caution as the data is using Java Object serialization and requires the received to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumer!. false boolean transferExchange (advanced) You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payloads is an Exchange and not a regular payload. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers having to use compatible Camel versions!. false boolean useMessageIDAsCorrelationID (advanced) Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages. 
false boolean waitForProvisionCorrelationToBeUpdatedCounter (advanced) Number of times to wait for provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled. 50 int waitForProvisionCorrelationToBeUpdatedThreadSleepingTime (advanced) Interval in millis to sleep each time while waiting for provisional correlation id to be updated. 100 long errorHandlerLoggingLevel (logging) Allows to configure the default errorHandler logging level for logging uncaught exceptions. Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel errorHandlerLogStackTrace (logging) Allows to control whether stacktraces should be logged or not, by the default errorHandler. true boolean password (security) Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String username (security) Username to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String transacted (transaction) Specifies whether to use transacted mode. false boolean transactedInOut (transaction) Specifies whether InOut operations (request reply) default to using transacted mode If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: that within a JTA transaction, the parameters passed to createQueue, createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction. false boolean lazyCreateTransactionManager (transaction (advanced)) If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true. true boolean transactionManager (transaction (advanced)) The Spring transaction manager to use. PlatformTransactionManager transactionName (transaction (advanced)) The name of the transaction to use. String transactionTimeout (transaction (advanced)) The timeout value of the transaction (in seconds), if using transacted mode. -1 int 47.6. Samples JMS is used in many examples for other components as well. But we provide a few samples below to get started. 47.6.1. Receiving from JMS In the following sample we configure a route that receives JMS messages and routes the message to a POJO: from("jms:queue:foo"). to("bean:myBusinessLogic"); You can of course use any of the EIP patterns so the route can be context based. For example, here's how to filter an order topic for the big spenders: from("jms:topic:OrdersTopic"). filter().method("myBean", "isGoldCustomer"). to("jms:queue:BigSpendersQueue"); 47.6.2. Sending to JMS In the sample below we poll a file folder and send the file content to a JMS topic. 
As we want the content of the file as a TextMessage instead of a BytesMessage , we need to convert the body to a String : from("file://orders"). convertBodyTo(String.class). to("jms:topic:OrdersTopic"); 47.6.3. Using Annotations Camel also has annotations so you can use POJO Consuming and POJO Producing. 47.6.4. Spring DSL sample The preceding examples use the Java DSL. Camel also supports Spring XML DSL. Here is the big spender sample using Spring DSL: <route> <from uri="jms:topic:OrdersTopic"/> <filter> <method ref="myBean" method="isGoldCustomer"/> <to uri="jms:queue:BigSpendersQueue"/> </filter> </route> 47.6.5. Other samples JMS appears in many of the examples for other components and EIP patterns, as well in this Camel documentation. So feel free to browse the documentation. 47.6.6. Using JMS as a Dead Letter Queue storing Exchange Normally, when using JMS as the transport, it only transfers the body and headers as the payload. If you want to use JMS with a Dead Letter Channel , using a JMS queue as the Dead Letter Queue, then normally the caused Exception is not stored in the JMS message. You can, however, use the transferExchange option on the JMS dead letter queue to instruct Camel to store the entire Exchange in the queue as a javax.jms.ObjectMessage that holds a org.apache.camel.support.DefaultExchangeHolder . This allows you to consume from the Dead Letter Queue and retrieve the caused exception from the Exchange property with the key Exchange.EXCEPTION_CAUGHT . The demo below illustrates this: // setup error handler to use JMS as queue and store the entire Exchange errorHandler(deadLetterChannel("jms:queue:dead?transferExchange=true")); Then you can consume from the JMS queue and analyze the problem: from("jms:queue:dead").to("bean:myErrorAnalyzer"); // and in our bean String body = exchange.getIn().getBody(); Exception cause = exchange.getProperty(Exchange.EXCEPTION_CAUGHT, Exception.class); // the cause message is String problem = cause.getMessage(); 47.6.7. Using JMS as a Dead Letter Channel storing error only You can use JMS to store the cause error message or to store a custom body, which you can initialize yourself. The following example uses the Message Translator EIP to do a transformation on the failed exchange before it is moved to the JMS dead letter queue: // we sent it to a seda dead queue first errorHandler(deadLetterChannel("seda:dead")); // and on the seda dead queue we can do the custom transformation before its sent to the JMS queue from("seda:dead").transform(exceptionMessage()).to("jms:queue:dead"); Here we only store the original cause error message in the transform. You can, however, use any Expression to send whatever you like. For example, you can invoke a method on a Bean or use a custom processor. 47.7. Message Mapping between JMS and Camel Camel automatically maps messages between javax.jms.Message and org.apache.camel.Message . When sending a JMS message, Camel converts the message body to the following JMS message types: Body Type JMS Message Comment String javax.jms.TextMessage org.w3c.dom.Node javax.jms.TextMessage The DOM will be converted to String . 
Map javax.jms.MapMessage java.io.Serializable javax.jms.ObjectMessage byte[] javax.jms.BytesMessage java.io.File javax.jms.BytesMessage java.io.Reader javax.jms.BytesMessage java.io.InputStream javax.jms.BytesMessage java.nio.ByteBuffer javax.jms.BytesMessage When receiving a JMS message, Camel converts the JMS message to the following body type: JMS Message Body Type javax.jms.TextMessage String javax.jms.BytesMessage byte[] javax.jms.MapMessage Map<String, Object> javax.jms.ObjectMessage Object 47.7.1. Disabling auto-mapping of JMS messages You can use the mapJmsMessage option to disable the auto-mapping above. If disabled, Camel will not try to map the received JMS message, but instead uses it directly as the payload. This allows you to avoid the overhead of mapping and lets Camel just pass through the JMS message. For instance, it even allows you to route javax.jms.ObjectMessage JMS messages with classes you do not have on the classpath. 47.7.2. Using a custom MessageConverter You can use the messageConverter option to do the mapping yourself in a Spring org.springframework.jms.support.converter.MessageConverter class. For example, in the route below we use a custom message converter when sending a message to the JMS order queue: from("file://inbox/order").to("jms:queue:order?messageConverter=#myMessageConverter"); You can also use a custom message converter when consuming from a JMS destination. 47.7.3. Controlling the mapping strategy selected You can use the jmsMessageType option on the endpoint URL to force a specific message type for all messages. In the route below, we poll files from a folder and send them as javax.jms.TextMessage as we have forced the JMS producer endpoint to use text messages: from("file://inbox/order").to("jms:queue:order?jmsMessageType=Text"); You can also specify the message type to use for each message by setting the header with the key CamelJmsMessageType . For example: from("file://inbox/order").setHeader("CamelJmsMessageType", JmsMessageType.Text).to("jms:queue:order"); The possible values are defined in the enum class, org.apache.camel.jms.JmsMessageType . 47.8. Message format when sending The exchange that is sent over the JMS wire must conform to the JMS Message spec . For the exchange.in.header the following rules apply for the header keys : Keys starting with JMS or JMSX are reserved. exchange.in.headers keys must be literals and all be valid Java identifiers (do not use dots in the key name). Camel replaces dots and hyphens when sending a message and reverses the replacement when consuming a JMS message: . is replaced by `DOT` , and the replacement is reversed when Camel consumes the message. - is replaced by `HYPHEN` , and the replacement is reversed when Camel consumes the message. See also the option jmsKeyFormatStrategy , which allows use of your own custom strategy for formatting keys. For the exchange.in.header , the following rules apply for the header values : The values must be primitives or their object counterparts (such as Integer , Long , Character ). The types String , CharSequence , Date , BigDecimal and BigInteger are all converted to their toString() representation. All other types are dropped. Camel logs with the category org.apache.camel.component.jms.JmsBinding at DEBUG level if it drops a given header value. 47.9. Message format when receiving Camel adds the following properties to the Exchange when it receives a message: Property Type Description org.apache.camel.jms.replyDestination javax.jms.Destination The reply destination.
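For instance, a processor on the consuming route can read this property to see where the caller expects a reply to be sent. A minimal sketch (the processor class name is illustrative; only the property name from the table above is taken from this documentation):

import javax.jms.Destination;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;

// Illustrative processor: reads the reply destination property that
// Camel sets on the Exchange when it receives a JMS message.
public class ReplyDestinationInspector implements Processor {
    @Override
    public void process(Exchange exchange) throws Exception {
        Destination replyDestination =
                exchange.getProperty("org.apache.camel.jms.replyDestination", Destination.class);
        if (replyDestination != null) {
            // for example, log where the caller expects the reply to go
            System.out.println("Reply is expected on: " + replyDestination);
        }
    }
}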
Camel adds the following JMS properties to the In message headers when it receives a JMS message: Header Type Description JMSCorrelationID String The JMS correlation ID. JMSDeliveryMode int The JMS delivery mode. JMSDestination javax.jms.Destination The JMS destination. JMSExpiration long The JMS expiration. JMSMessageID String The JMS unique message ID. JMSPriority int The JMS priority (with 0 as the lowest priority and 9 as the highest). JMSRedelivered boolean Is the JMS message redelivered. JMSReplyTo javax.jms.Destination The JMS reply-to destination. JMSTimestamp long The JMS timestamp. JMSType String The JMS type. JMSXGroupID String The JMS group ID. As all the above information is standard JMS you can check the JMS documentation for further details. 47.10. About using Camel to send and receive messages and JMSReplyTo The JMS component is complex and you have to pay close attention to how it works in some cases. So this is a short summary of some of the areas/pitfalls to look for. When Camel sends a message using its JMSProducer , it checks the following conditions: The message exchange pattern, Whether a JMSReplyTo was set in the endpoint or in the message headers, Whether any of the following options have been set on the JMS endpoint: disableReplyTo , preserveMessageQos , explicitQosEnabled . All this can be a tad complex to understand and configure to support your use case. 47.10.1. JmsProducer The JmsProducer behaves as follows, depending on configuration: Exchange Pattern Other options Description InOut - Camel will expect a reply, set a temporary JMSReplyTo , and after sending the message, it will start to listen for the reply message on the temporary queue. InOut JMSReplyTo is set Camel will expect a reply and, after sending the message, it will start to listen for the reply message on the specified JMSReplyTo queue. InOnly - Camel will send the message and not expect a reply. InOnly JMSReplyTo is set By default, Camel discards the JMSReplyTo destination and clears the JMSReplyTo header before sending the message. Camel then sends the message and does not expect a reply. Camel logs this at WARN level (changed to DEBUG level from Camel 2.6 onwards). You can use preserveMessageQos=true to instruct Camel to keep the JMSReplyTo . In both InOnly cases the JmsProducer does not expect any reply and thus continues after sending the message. 47.10.2. JmsConsumer The JmsConsumer behaves as follows, depending on configuration: Exchange Pattern Other options Description InOut - Camel will send the reply back to the JMSReplyTo queue. InOnly - Camel will not send a reply back, as the pattern is InOnly . - disableReplyTo=true This option suppresses replies. So pay attention to the message exchange pattern set on your exchanges. If you send a message to a JMS destination in the middle of your route you can specify the exchange pattern to use; see Request Reply for more details. This is useful if you want to send an InOnly message to a JMS topic: from("activemq:queue:in") .to("bean:validateOrder") .to(ExchangePattern.InOnly, "activemq:topic:order") .to("bean:handleOrder"); 47.11. Reuse endpoint and send to different destinations computed at runtime If you need to send messages to a lot of different JMS destinations, it makes sense to reuse a JMS endpoint and specify the real destination in a message header. This allows Camel to reuse the same endpoint, but send to different destinations. This greatly reduces the number of endpoints created and economizes on memory and thread resources.
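A compact way to apply this pattern is to set the destination name header directly in the route with an inline expression; the headers involved are listed in the table that follows. A minimal sketch, assuming an ActiveMQ component named activemq and an illustrative region header (the route and queue names are made up for the example):

import org.apache.camel.builder.RouteBuilder;

public class DynamicDestinationRoute extends RouteBuilder {
    @Override
    public void configure() {
        // The CamelJmsDestinationName header overrides the destination
        // configured on the endpoint URI (queue:dummy is just a placeholder).
        from("direct:orders")
            .setHeader("CamelJmsDestinationName", simple("order.${header.region}"))
            .to("activemq:queue:dummy");
    }
}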
You can specify the destination in the following headers: Header Type Description CamelJmsDestination javax.jms.Destination A destination object. CamelJmsDestinationName String The destination name. For example, the following route shows how you can compute a destination at run time and use it to override the destination appearing in the JMS URL: from("file://inbox") .to("bean:computeDestination") .to("activemq:queue:dummy"); The queue name, dummy , is just a placeholder. It must be provided as part of the JMS endpoint URL, but it will be ignored in this example. In the computeDestination bean, specify the real destination by setting the CamelJmsDestinationName header as follows: public void setJmsHeader(Exchange exchange) { String id = .... exchange.getIn().setHeader("CamelJmsDestinationName", "order:" + id); } Then Camel will read this header and use it as the destination instead of the one configured on the endpoint. So, in this example Camel sends the message to activemq:queue:order:2 , assuming the id value was 2. If both the CamelJmsDestination and the CamelJmsDestinationName headers are set, CamelJmsDestination takes priority. Keep in mind that the JMS producer removes both CamelJmsDestination and CamelJmsDestinationName headers from the exchange and does not propagate them to the created JMS message, in order to avoid accidental loops in the routes (in scenarios when the message will be forwarded to another JMS endpoint). 47.12. Configuring different JMS providers You can configure your JMS provider in Spring XML as follows: Basically, you can configure as many JMS component instances as you wish and give them a unique name using the id attribute . The preceding example configures an activemq component. You could do the same to configure MQSeries, TibCo, BEA, Sonic and so on. Once you have a named JMS component, you can then refer to endpoints within that component using URIs. For example for the component name, activemq , you can then refer to destinations using the URI format, activemq:[queue:|topic:]destinationName . You can use the same approach for all other JMS providers. This works by the SpringCamelContext lazily fetching components from the Spring context for the scheme name you use for Endpoint URIs and having the Component resolve the endpoint URIs. 47.12.1. Using JNDI to find the ConnectionFactory If you are using a J2EE container, you might need to look up JNDI to find the JMS ConnectionFactory rather than use the usual <bean> mechanism in Spring. You can do this using Spring's factory bean or the new Spring XML namespace. For example: <bean id="weblogic" class="org.apache.camel.component.jms.JmsComponent"> <property name="connectionFactory" ref="myConnectionFactory"/> </bean> <jee:jndi-lookup id="myConnectionFactory" jndi-name="jms/connectionFactory"/> See The jee schema in the Spring reference documentation for more details about JNDI lookup. 47.13. Concurrent Consuming A common requirement with JMS is to consume messages concurrently in multiple threads in order to make an application more responsive. You can set the concurrentConsumers option to specify the number of threads servicing the JMS endpoint, as follows: from("jms:SomeQueue?concurrentConsumers=20"). bean(MyClass.class); You can configure this option in one of the following ways: On the JmsComponent , On the endpoint URI, or By invoking setConcurrentConsumers() directly on the JmsEndpoint . 47.13.1.
Concurrent Consuming with async consumer Notice that each concurrent consumer will only pickup the available message from the JMS broker, when the current message has been fully processed. You can set the option asyncConsumer=true to let the consumer pickup the message from the JMS queue, while the message is being processed asynchronously (by the Asynchronous Routing Engine). See more details in the table on top of the page about the asyncConsumer option. from("jms:SomeQueue?concurrentConsumers=20&asyncConsumer=true"). bean(MyClass.class); 47.14. Request-reply over JMS Camel supports Request Reply over JMS. In essence the MEP of the Exchange should be InOut when you send a message to a JMS queue. Camel offers a number of options to configure request/reply over JMS that influence performance and clustered environments. The table below summaries the options. Option Performance Cluster Description Temporary Fast Yes A temporary queue is used as reply queue, and automatic created by Camel. To use this do not specify a replyTo queue name. And you can optionally configure replyToType=Temporary to make it stand out that temporary queues are in use. Shared Slow Yes A shared persistent queue is used as reply queue. The queue must be created beforehand, although some brokers can create them on the fly such as Apache ActiveMQ. To use this you must specify the replyTo queue name. And you can optionally configure replyToType=Shared to make it stand out that shared queues are in use. A shared queue can be used in a clustered environment with multiple nodes running this Camel application at the same time. All using the same shared reply queue. This is possible because JMS Message selectors are used to correlate expected reply messages; this impacts performance though. JMS Message selectors is slower, and therefore not as fast as Temporary or Exclusive queues. See further below how to tweak this for better performance. Exclusive Fast No (*Yes) An exclusive persistent queue is used as reply queue. The queue must be created beforehand, although some brokers can create them on the fly such as Apache ActiveMQ. To use this you must specify the replyTo queue name. And you must configure replyToType=Exclusive to instruct Camel to use exclusive queues, as Shared is used by default, if a replyTo queue name was configured. When using exclusive reply queues, then JMS Message selectors are not in use, and therefore other applications must not use this queue as well. An exclusive queue cannot be used in a clustered environment with multiple nodes running this Camel application at the same time; as we do not have control if the reply queue comes back to the same node that sent the request message; that is why shared queues use JMS Message selectors to make sure of this. Though if you configure each Exclusive reply queue with an unique name per node, then you can run this in a clustered environment. As then the reply message will be sent back to that queue for the given node, that awaits the reply message. concurrentConsumers Fast Yes Allows to process reply messages concurrently using concurrent message listeners in use. You can specify a range using the concurrentConsumers and maxConcurrentConsumers options. Notice: That using Shared reply queues may not work as well with concurrent listeners, so use this option with care. maxConcurrentConsumers Fast Yes Allows to process reply messages concurrently using concurrent message listeners in use. 
You can specify a range using the concurrentConsumers and maxConcurrentConsumers options. Notice: That using Shared reply queues may not work as well with concurrent listeners, so use this option with care. The JmsProducer detects the InOut and provides a JMSReplyTo header with the reply destination to be used. By default Camel uses a temporary queue, but you can use the replyTo option on the endpoint to specify a fixed reply queue (see more below about fixed reply queue). Camel will automatically setup a consumer which listen on the reply queue, so you should not do anything. This consumer is a Spring DefaultMessageListenerContainer which listen for replies. However it's fixed to 1 concurrent consumer. That means replies will be processed in sequence as there are only 1 thread to process the replies. You can configure the listener to use concurrent threads using the concurrentConsumers and maxConcurrentConsumers options. This allows you to easier configure this in Camel as shown below: from(xxx) .inOut().to("activemq:queue:foo?concurrentConsumers=5") .to(yyy) .to(zzz); In this route we instruct Camel to route replies asynchronously using a thread pool with 5 threads. 47.14.1. Request-reply over JMS and using a shared fixed reply queue If you use a fixed reply queue when doing Request Reply over JMS as shown in the example below, then pay attention. from(xxx) .inOut().to("activemq:queue:foo?replyTo=bar") .to(yyy) In this example the fixed reply queue named "bar" is used. By default Camel assumes the queue is shared when using fixed reply queues, and therefore it uses a JMSSelector to only pickup the expected reply messages (eg based on the JMSCorrelationID ). See section for exclusive fixed reply queues. That means its not as fast as temporary queues. You can speedup how often Camel will pull for reply messages using the receiveTimeout option. By default its 1000 millis. So to make it faster you can set it to 250 millis to pull 4 times per second as shown: from(xxx) .inOut().to("activemq:queue:foo?replyTo=bar&receiveTimeout=250") .to(yyy) Notice this will cause the Camel to send pull requests to the message broker more frequent, and thus require more network traffic. It is generally recommended to use temporary queues if possible. 47.14.2. Request-reply over JMS and using an exclusive fixed reply queue In the example, Camel would anticipate the fixed reply queue named "bar" was shared, and thus it uses a JMSSelector to only consume reply messages which it expects. However there is a drawback doing this as the JMS selector is slower. Also the consumer on the reply queue is slower to update with new JMS selector ids. In fact it only updates when the receiveTimeout option times out, which by default is 1 second. So in theory the reply messages could take up till about 1 sec to be detected. On the other hand if the fixed reply queue is exclusive to the Camel reply consumer, then we can avoid using the JMS selectors, and thus be more performant. In fact as fast as using temporary queues. There is the ReplyToType option which you can configure to Exclusive to tell Camel that the reply queue is exclusive as shown in the example below: from(xxx) .inOut().to("activemq:queue:foo?replyTo=bar&replyToType=Exclusive") .to(yyy) Mind that the queue must be exclusive to each and every endpoint. 
So if you have two routes, then they each need an unique reply queue as shown in the example: from(xxx) .inOut().to("activemq:queue:foo?replyTo=bar&replyToType=Exclusive") .to(yyy) from(aaa) .inOut().to("activemq:queue:order?replyTo=order.reply&replyToType=Exclusive") .to(bbb) The same applies if you run in a clustered environment. Then each node in the cluster must use an unique reply queue name. As otherwise each node in the cluster may pickup messages which was intended as a reply on another node. For clustered environments its recommended to use shared reply queues instead. 47.15. Synchronizing clocks between senders and receivers When doing messaging between systems, its desirable that the systems have synchronized clocks. For example when sending a JMS message, then you can set a time to live value on the message. Then the receiver can inspect this value, and determine if the message is already expired, and thus drop the message instead of consume and process it. However this requires that both sender and receiver have synchronized clocks. If you are using ActiveMQ then you can use the timestamp plugin to synchronize clocks. 47.16. About time to live Read first above about synchronized clocks. When you do request/reply (InOut) over JMS with Camel then Camel uses a timeout on the sender side, which is default 20 seconds from the requestTimeout option. You can control this by setting a higher/lower value. However the time to live value is still set on the message being send. So that requires the clocks to be synchronized between the systems. If they are not, then you may want to disable the time to live value being set. This is now possible using the disableTimeToLive option from Camel 2.8 onwards. So if you set this option to disableTimeToLive=true , then Camel does not set any time to live value when sending JMS messages. But the request timeout is still active. So for example if you do request/reply over JMS and have disabled time to live, then Camel will still use a timeout by 20 seconds (the requestTimeout option). That option can of course also be configured. So the two options requestTimeout and disableTimeToLive gives you fine grained control when doing request/reply. You can provide a header in the message to override and use as the request timeout value instead of the endpoint configured value. For example: from("direct:someWhere") .to("jms:queue:foo?replyTo=bar&requestTimeout=30s") .to("bean:processReply"); In the route above we have a endpoint configured requestTimeout of 30 seconds. So Camel will wait up till 30 seconds for that reply message to come back on the bar queue. If no reply message is received then a org.apache.camel.ExchangeTimedOutException is set on the Exchange and Camel continues routing the message, which would then fail due the exception, and Camel's error handler reacts. If you want to use a per message timeout value, you can set the header with key org.apache.camel.component.jms.JmsConstants#JMS_REQUEST_TIMEOUT which has constant value "CamelJmsRequestTimeout" with a timeout value as long type. 
For example, we can use a bean to compute the timeout value per individual message, such as calling the "whatIsTheTimeout" method on the service bean as shown below: from("direct:someWhere") .setHeader("CamelJmsRequestTimeout", method(ServiceBean.class, "whatIsTheTimeout")) .to("jms:queue:foo?replyTo=bar&requestTimeout=30s") .to("bean:processReply"); When you do fire and forget (InOnly) over JMS with Camel then Camel by default does not set any time to live value on the message. You can configure a value by using the timeToLive option. For example, to indicate 5 seconds, you set timeToLive=5000 . The option disableTimeToLive can be used to force disabling the time to live, also for InOnly messaging. The requestTimeout option is not used for InOnly messaging. 47.17. Enabling Transacted Consumption A common requirement is to consume from a queue in a transaction and then process the message using the Camel route. To do this, just ensure that you set the following properties on the component/endpoint: transacted = true transactionManager = a Transaction Manager - typically the JmsTransactionManager See the Transactional Client EIP pattern for further details. (A configuration sketch is shown after the code listings below.) Transactions and [Request Reply] over JMS When using Request Reply over JMS you cannot use a single transaction; JMS will not send any messages until a commit is performed, so the server side won't receive anything at all until the transaction commits. Therefore to use Request Reply you must commit a transaction after sending the request and then use a separate transaction for receiving the response. To address this issue the JMS component uses different properties to specify transaction use for one-way messaging and request reply messaging: The transacted property applies only to the InOnly message Exchange Pattern (MEP). You can leverage the DMLC transacted session API using the following properties on component/endpoint: transacted = true lazyCreateTransactionManager = false The benefit of doing so is that the cacheLevel setting will be honored when using local transactions without a configured TransactionManager. When a TransactionManager is configured, no caching happens at DMLC level and it is necessary to rely on a pooled connection factory. For more details about this kind of setup, see the Spring documentation for the DefaultMessageListenerContainer. 47.18. Using JMSReplyTo for late replies When using Camel as a JMS listener, it sets an Exchange property with the value of the ReplyTo javax.jms.Destination object, having the key ReplyTo . You can obtain this Destination as follows: Destination replyDestination = exchange.getIn().getHeader(JmsConstants.JMS_REPLY_DESTINATION, Destination.class); And then later use it to send a reply using regular JMS or Camel. // we need to pass in the JMS component, and in this sample we use ActiveMQ JmsEndpoint endpoint = JmsEndpoint.newInstance(replyDestination, activeMQComponent); // now we have the endpoint we can use regular Camel API to send a message to it template.sendBody(endpoint, "Here is the late reply."); A different solution to sending a reply is to provide the replyDestination object in the same Exchange property when sending. Camel will then pick up this property and use it for the real destination. The endpoint URI must include a dummy destination, however.
For example: // we pretend to send it to some non existing dummy queue template.send("activemq:queue:dummy, new Processor() { public void process(Exchange exchange) throws Exception { // and here we override the destination with the ReplyTo destination object so the message is sent to there instead of dummy exchange.getIn().setHeader(JmsConstants.JMS_DESTINATION, replyDestination); exchange.getIn().setBody("Here is the late reply."); } } 47.19. Using a request timeout In the sample below we send a Request Reply style message Exchange (we use the requestBody method = InOut ) to the slow queue for further processing in Camel and we wait for a return reply: 47.20. Sending an InOnly message and keeping the JMSReplyTo header When sending to a JMS destination using camel-jms the producer will use the MEP to detect if its InOnly or InOut messaging. However there can be times where you want to send an InOnly message but keeping the JMSReplyTo header. To do so you have to instruct Camel to keep it, otherwise the JMSReplyTo header will be dropped. For example to send an InOnly message to the foo queue, but with a JMSReplyTo with bar queue you can do as follows: template.send("activemq:queue:foo?preserveMessageQos=true", new Processor() { public void process(Exchange exchange) throws Exception { exchange.getIn().setBody("World"); exchange.getIn().setHeader("JMSReplyTo", "bar"); } }); Notice we use preserveMessageQos=true to instruct Camel to keep the JMSReplyTo header. 47.21. Setting JMS provider options on the destination Some JMS providers, like IBM's WebSphere MQ need options to be set on the JMS destination. For example, you may need to specify the targetClient option. Since targetClient is a WebSphere MQ option and not a Camel URI option, you need to set that on the JMS destination name like so: // ... .setHeader("CamelJmsDestinationName", constant("queue:///MY_QUEUE?targetClient=1")) .to("wmq:queue:MY_QUEUE?useMessageIDAsCorrelationID=true"); Some versions of WMQ won't accept this option on the destination name and you will get an exception like: A workaround is to use a custom DestinationResolver: JmsComponent wmq = new JmsComponent(connectionFactory); wmq.setDestinationResolver(new DestinationResolver() { public Destination resolveDestinationName(Session session, String destinationName, boolean pubSubDomain) throws JMSException { MQQueueSession wmqSession = (MQQueueSession) session; return wmqSession.createQueue("queue:///" + destinationName + "?targetClient=1"); } }); 47.22. Spring Boot Auto-Configuration The component supports 99 options, which are listed below. Name Description Default Type camel.component.jms.accept-messages-while-stopping Specifies whether the consumer accept messages while it is stopping. You may consider enabling this option, if you start and stop JMS routes at runtime, while there are still messages enqueued on the queue. If this option is false, and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved at a dead letter queue on the JMS broker. To avoid this its recommended to enable this option. false Boolean camel.component.jms.acknowledgement-mode-name The JMS acknowledgement name, which is one of: SESSION_TRANSACTED, CLIENT_ACKNOWLEDGE, AUTO_ACKNOWLEDGE, DUPS_OK_ACKNOWLEDGE. 
AUTO_ACKNOWLEDGE String camel.component.jms.allow-additional-headers This option is used to allow additional headers which may have values that are invalid according to JMS specification. For example some message systems such as WMQ do this with header names using prefix JMS_IBM_MQMD_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use as suffix for wildcard matching. String camel.component.jms.allow-auto-wired-connection-factory Whether to auto-discover ConnectionFactory from the registry, if no connection factory has been configured. If only one instance of ConnectionFactory is found then it will be used. This is enabled by default. true Boolean camel.component.jms.allow-auto-wired-destination-resolver Whether to auto-discover DestinationResolver from the registry, if no destination resolver has been configured. If only one instance of DestinationResolver is found then it will be used. This is enabled by default. true Boolean camel.component.jms.allow-null-body Whether to allow sending messages with no body. If this option is false and the message body is null, then an JMSException is thrown. true Boolean camel.component.jms.allow-reply-manager-quick-stop Whether the DefaultMessageListenerContainer used in the reply managers for request-reply messaging allow the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers but to enable for reply managers you must enable this flag. false Boolean camel.component.jms.allow-serialized-headers Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false Boolean camel.component.jms.always-copy-message If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set). false Boolean camel.component.jms.artemis-consumer-priority Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority. Messages will only going to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance because it does not meet the criteria of any selectors associated with the consumer). Integer camel.component.jms.artemis-streaming-enabled Whether optimizing for Apache Artemis streaming mode. This can reduce memory overhead when using Artemis with JMS StreamMessage types. This option must only be enabled if Apache Artemis is being used. false Boolean camel.component.jms.async-consumer Whether the JmsConsumer processes the Exchange asynchronously. 
If enabled then the JmsConsumer may pickup the message from the JMS queue, while the message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer will pickup the message from the JMS queue. Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as transaction must be executed synchronously (Camel 3.0 may support async transactions). false Boolean camel.component.jms.async-start-listener Whether to startup the JmsConsumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or failover. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry. false Boolean camel.component.jms.async-stop-listener Whether to stop the JmsConsumer message listener asynchronously, when stopping a route. false Boolean camel.component.jms.auto-startup Specifies whether the consumer container should auto-startup. true Boolean camel.component.jms.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.jms.cache-level Sets the cache level by ID for the underlying JMS resources. See cacheLevelName option for more details. Integer camel.component.jms.cache-level-name Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE_AUTO, CACHE_CONNECTION, CACHE_CONSUMER, CACHE_NONE, and CACHE_SESSION. The default setting is CACHE_AUTO. See the Spring documentation and Transactions Cache Levels for more information. CACHE_AUTO String camel.component.jms.client-id Sets the JMS client ID to use. Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions. If using Apache ActiveMQ you may prefer to use Virtual Topics instead. String camel.component.jms.concurrent-consumers Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. 1 Integer camel.component.jms.configuration To use a shared JMS configuration. The option is a org.apache.camel.component.jms.JmsConfiguration type. JmsConfiguration camel.component.jms.connection-factory The connection factory to be use. A connection factory must be configured either on the component or endpoint. The option is a javax.jms.ConnectionFactory type. 
ConnectionFactory camel.component.jms.consumer-type The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. ConsumerType camel.component.jms.correlation-property When using InOut exchange pattern use this JMS property instead of JMSCorrelationID JMS property to correlate messages. If set messages will be correlated solely on the value of this property JMSCorrelationID property will be ignored and not set by Camel. String camel.component.jms.default-task-executor-type Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. Possible values: SimpleAsync (uses Spring's SimpleAsyncTaskExecutor) or ThreadPool (uses Spring's ThreadPoolTaskExecutor with optimal values - cached threadpool-like). If not set, it defaults to the behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers. DefaultTaskExecutorType camel.component.jms.delivery-delay Sets delivery delay to use for send calls for JMS. This option requires JMS 2.0 compliant broker. -1 Long camel.component.jms.delivery-mode Specifies the delivery mode to be used. Possible values are those defined by javax.jms.DeliveryMode. NON_PERSISTENT = 1 and PERSISTENT = 2. Integer camel.component.jms.delivery-persistent Specifies whether persistent delivery is used by default. true Boolean camel.component.jms.destination-resolver A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to lookup the real destination in a JNDI registry). The option is a org.springframework.jms.support.destination.DestinationResolver type. DestinationResolver camel.component.jms.disable-reply-to Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route message from one system to another. false Boolean camel.component.jms.disable-time-to-live Use this option to force disabling time to live. For example when you do request/reply over JMS, then Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to archive. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See below in section About time to live for more details. 
false Boolean camel.component.jms.durable-subscription-name The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured as well. String camel.component.jms.eager-loading-of-properties Enables eager loading of JMS properties and payload as soon as a message is loaded which generally is inefficient as the JMS properties may not be required but sometimes can catch early any issues with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody. false Boolean camel.component.jms.eager-poison-body If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison are already stored as exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties. Poison JMS message due to USD\{exception.message} String camel.component.jms.enabled Whether to enable auto configuration of the jms component. This is enabled by default. Boolean camel.component.jms.error-handler Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure logging level and whether stack traces should be logged using errorHandlerLoggingLevel and errorHandlerLogStackTrace options. This makes it much easier to configure, than having to code a custom errorHandler. The option is a org.springframework.util.ErrorHandler type. ErrorHandler camel.component.jms.error-handler-log-stack-trace Allows to control whether stacktraces should be logged or not, by the default errorHandler. true Boolean camel.component.jms.error-handler-logging-level Allows to configure the default errorHandler logging level for logging uncaught exceptions. LoggingLevel camel.component.jms.exception-listener Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions. The option is a javax.jms.ExceptionListener type. ExceptionListener camel.component.jms.explicit-qos-enabled Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring's JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers. false Boolean camel.component.jms.expose-listener-session Specifies whether the listener session should be exposed when consuming messages. false Boolean camel.component.jms.force-send-original-message When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received. false Boolean camel.component.jms.format-date-headers-to-iso8601 Sets whether JMS date properties should be formatted according to the ISO 8601 standard. false Boolean camel.component.jms.header-filter-strategy To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. 
HeaderFilterStrategy camel.component.jms.idle-consumer-limit Specify the limit for the number of consumers that are allowed to be idle at any given time. 1 Integer camel.component.jms.idle-task-execution-limit Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring. 1 Integer camel.component.jms.include-all-j-m-s-x-properties Whether to include all JMSXxxx properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply. false Boolean camel.component.jms.include-sent-j-m-s-message-i-d Only applicable when sending to JMS destination using InOnly (eg fire and forget). Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination. false Boolean camel.component.jms.jms-key-format-strategy Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation. JmsKeyFormatStrategy camel.component.jms.jms-message-type Allows you to force the use of a specific javax.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it. JmsMessageType camel.component.jms.lazy-create-transaction-manager If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true. true Boolean camel.component.jms.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.jms.map-jms-message Specifies whether Camel should auto map the received JMS message to a suited payload type, such as javax.jms.TextMessage to a String etc. true Boolean camel.component.jms.max-concurrent-consumers Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. 
Integer camel.component.jms.max-messages-per-task The number of messages per task. -1 is unlimited. If you use a range for concurrent consumers (eg min max), then this option can be used to set a value to eg 100 to control how fast the consumers will shrink when less work is required. -1 Integer camel.component.jms.message-converter To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control how to map to/from a javax.jms.Message. The option is a org.springframework.jms.support.converter.MessageConverter type. MessageConverter camel.component.jms.message-created-strategy To use the given MessageCreatedStrategy which are invoked when Camel creates new instances of javax.jms.Message objects when Camel is sending a JMS message. The option is a org.apache.camel.component.jms.MessageCreatedStrategy type. MessageCreatedStrategy camel.component.jms.message-id-enabled When sending, specifies whether message IDs should be added. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value. true Boolean camel.component.jms.message-listener-container-factory Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom. The option is a org.apache.camel.component.jms.MessageListenerContainerFactory type. MessageListenerContainerFactory camel.component.jms.message-timestamp-enabled Specifies whether timestamps should be enabled by default on sending messages. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint the timestamp must be set to its normal value. true Boolean camel.component.jms.password Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String camel.component.jms.preserve-message-qos Set to true, if you want to send message using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header. false Boolean camel.component.jms.priority Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect. 4 Integer camel.component.jms.pub-sub-no-local Specifies whether to inhibit the delivery of messages published by its own connection. false Boolean camel.component.jms.queue-browse-strategy To use a custom QueueBrowseStrategy when browsing queues. The option is a org.apache.camel.component.jms.QueueBrowseStrategy type. QueueBrowseStrategy camel.component.jms.receive-timeout The timeout for receiving messages (in milliseconds). The option is a long type. 1000 Long camel.component.jms.recovery-interval Specifies the interval between recovery attempts, i.e. 
when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds. The option is a long type. 5000 Long camel.component.jms.reply-to Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer). String camel.component.jms.reply-to-cache-level-name Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require to set the replyToCacheLevelName=CACHE_NONE to work. Note: If using temporary queues then CACHE_NONE is not allowed, and you must use a higher value such as CACHE_CONSUMER or CACHE_SESSION. String camel.component.jms.reply-to-concurrent-consumers Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. 1 Integer camel.component.jms.reply-to-delivery-persistent Specifies whether to use persistent delivery by default for replies. true Boolean camel.component.jms.reply-to-destination-selector-name Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue). String camel.component.jms.reply-to-max-concurrent-consumers Specifies the maximum number of concurrent consumers when using request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. Integer camel.component.jms.reply-to-on-timeout-max-concurrent-consumers Specifies the maximum number of concurrent consumers for continue routing when timeout occurred when using request/reply over JMS. 1 Integer camel.component.jms.reply-to-override Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination. String camel.component.jms.reply-to-same-destination-allowed Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself. false Boolean camel.component.jms.reply-to-type Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. See Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues has lower performance than its alternatives Temporary and Exclusive. ReplyToType camel.component.jms.request-timeout The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option. The option is a long type. 
20000 Long camel.component.jms.request-timeout-checker-interval Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout. The option is a long type. 1000 Long camel.component.jms.selector Sets the JMS selector to use. String camel.component.jms.stream-message-type-enabled Sets whether StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc will either by sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used which enforces the entire message payload to be read into memory. By enabling this option the message payload is read into memory in chunks and each chunk is then written to the StreamMessage until no more data. false Boolean camel.component.jms.subscription-durable Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. false Boolean camel.component.jms.subscription-name Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client's JMS client id. Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0). String camel.component.jms.subscription-shared Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. Requires a JMS 2.0 compatible message broker. false Boolean camel.component.jms.synchronous Sets whether synchronous processing should be strictly used. false Boolean camel.component.jms.task-executor Allows you to specify a custom task executor for consuming messages. The option is a org.springframework.core.task.TaskExecutor type. TaskExecutor camel.component.jms.test-connection-on-startup Specifies whether to test the connection on startup. This ensures that when Camel starts that all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers is tested as well. false Boolean camel.component.jms.time-to-live When sending messages, specifies the time-to-live of the message (in milliseconds). 
-1 Long camel.component.jms.transacted Specifies whether to use transacted mode. false Boolean camel.component.jms.transacted-in-out Specifies whether InOut operations (request reply) default to using transacted mode If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: that within a JTA transaction, the parameters passed to createQueue, createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction. false Boolean camel.component.jms.transaction-manager The Spring transaction manager to use. The option is a org.springframework.transaction.PlatformTransactionManager type. PlatformTransactionManager camel.component.jms.transaction-name The name of the transaction to use. String camel.component.jms.transaction-timeout The timeout value of the transaction (in seconds), if using transacted mode. -1 Integer camel.component.jms.transfer-exception If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be send back in response as a javax.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. Use this with caution as the data is using Java Object serialization and requires the received to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumer!. false Boolean camel.component.jms.transfer-exchange You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payloads is an Exchange and not a regular payload. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers having to use compatible Camel versions!. false Boolean camel.component.jms.use-message-i-d-as-correlation-i-d Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages. 
false Boolean camel.component.jms.username Username to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String camel.component.jms.wait-for-provision-correlation-to-be-updated-counter Number of times to wait for provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled. 50 Integer camel.component.jms.wait-for-provision-correlation-to-be-updated-thread-sleeping-time Interval in millis to sleep each time while waiting for provisional correlation id to be updated. The option is a long type. 100 Long | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jms-starter</artifactId> </dependency>",
"jms:[queue:|topic:]destinationName[?options]",
"jms:FOO.BAR",
"jms:queue:FOO.BAR",
"jms:topic:Stocks.Prices",
"jms:destinationType:destinationName",
"from(\"jms:queue:foo\"). to(\"bean:myBusinessLogic\");",
"from(\"jms:topic:OrdersTopic\"). filter().method(\"myBean\", \"isGoldCustomer\"). to(\"jms:queue:BigSpendersQueue\");",
"from(\"file://orders\"). convertBodyTo(String.class). to(\"jms:topic:OrdersTopic\");",
"<route> <from uri=\"jms:topic:OrdersTopic\"/> <filter> <method ref=\"myBean\" method=\"isGoldCustomer\"/> <to uri=\"jms:queue:BigSpendersQueue\"/> </filter> </route>",
"// setup error handler to use JMS as queue and store the entire Exchange errorHandler(deadLetterChannel(\"jms:queue:dead?transferExchange=true\"));",
"from(\"jms:queue:dead\").to(\"bean:myErrorAnalyzer\"); // and in our bean String body = exchange.getIn().getBody(); Exception cause = exchange.getProperty(Exchange.EXCEPTION_CAUGHT, Exception.class); // the cause message is String problem = cause.getMessage();",
"// we sent it to a seda dead queue first errorHandler(deadLetterChannel(\"seda:dead\")); // and on the seda dead queue we can do the custom transformation before its sent to the JMS queue from(\"seda:dead\").transform(exceptionMessage()).to(\"jms:queue:dead\");",
"from(\"file://inbox/order\").to(\"jms:queue:order?messageConverter=#myMessageConverter\");",
"from(\"file://inbox/order\").to(\"jms:queue:order?jmsMessageType=Text\");",
"from(\"file://inbox/order\").setHeader(\"CamelJmsMessageType\", JmsMessageType.Text).to(\"jms:queue:order\");",
"2008-07-09 06:43:04,046 [main ] DEBUG JmsBinding - Ignoring non primitive header: order of class: org.apache.camel.component.jms.issues.DummyOrder with value: DummyOrder{orderId=333, itemId=4444, quantity=2}",
"from(\"activemq:queue:in\") .to(\"bean:validateOrder\") .to(ExchangePattern.InOnly, \"activemq:topic:order\") .to(\"bean:handleOrder\");",
"from(\"file://inbox\") .to(\"bean:computeDestination\") .to(\"activemq:queue:dummy\");",
"public void setJmsHeader(Exchange exchange) { String id = . exchange.getIn().setHeader(\"CamelJmsDestinationName\", \"order:\" + id\"); }",
"<bean id=\"weblogic\" class=\"org.apache.camel.component.jms.JmsComponent\"> <property name=\"connectionFactory\" ref=\"myConnectionFactory\"/> </bean> <jee:jndi-lookup id=\"myConnectionFactory\" jndi-name=\"jms/connectionFactory\"/>",
"from(\"jms:SomeQueue?concurrentConsumers=20\"). bean(MyClass.class);",
"from(\"jms:SomeQueue?concurrentConsumers=20&asyncConsumer=true\"). bean(MyClass.class);",
"from(xxx) .inOut().to(\"activemq:queue:foo?concurrentConsumers=5\") .to(yyy) .to(zzz);",
"from(xxx) .inOut().to(\"activemq:queue:foo?replyTo=bar\") .to(yyy)",
"from(xxx) .inOut().to(\"activemq:queue:foo?replyTo=bar&receiveTimeout=250\") .to(yyy)",
"from(xxx) .inOut().to(\"activemq:queue:foo?replyTo=bar&replyToType=Exclusive\") .to(yyy)",
"from(xxx) .inOut().to(\"activemq:queue:foo?replyTo=bar&replyToType=Exclusive\") .to(yyy) from(aaa) .inOut().to(\"activemq:queue:order?replyTo=order.reply&replyToType=Exclusive\") .to(bbb)",
"from(\"direct:someWhere\") .to(\"jms:queue:foo?replyTo=bar&requestTimeout=30s\") .to(\"bean:processReply\");",
"from(\"direct:someWhere\") .setHeader(\"CamelJmsRequestTimeout\", method(ServiceBean.class, \"whatIsTheTimeout\")) .to(\"jms:queue:foo?replyTo=bar&requestTimeout=30s\") .to(\"bean:processReply\");",
"Destination replyDestination = exchange.getIn().getHeader(JmsConstants.JMS_REPLY_DESTINATION, Destination.class);",
"// we need to pass in the JMS component, and in this sample we use ActiveMQ JmsEndpoint endpoint = JmsEndpoint.newInstance(replyDestination, activeMQComponent); // now we have the endpoint we can use regular Camel API to send a message to it template.sendBody(endpoint, \"Here is the late reply.\");",
"// we pretend to send it to some non existing dummy queue template.send(\"activemq:queue:dummy, new Processor() { public void process(Exchange exchange) throws Exception { // and here we override the destination with the ReplyTo destination object so the message is sent to there instead of dummy exchange.getIn().setHeader(JmsConstants.JMS_DESTINATION, replyDestination); exchange.getIn().setBody(\"Here is the late reply.\"); } }",
"template.send(\"activemq:queue:foo?preserveMessageQos=true\", new Processor() { public void process(Exchange exchange) throws Exception { exchange.getIn().setBody(\"World\"); exchange.getIn().setHeader(\"JMSReplyTo\", \"bar\"); } });",
"// .setHeader(\"CamelJmsDestinationName\", constant(\"queue:///MY_QUEUE?targetClient=1\")) .to(\"wmq:queue:MY_QUEUE?useMessageIDAsCorrelationID=true\");",
"com.ibm.msg.client.jms.DetailedJMSException: JMSCC0005: The specified value 'MY_QUEUE?targetClient=1' is not allowed for 'XMSC_DESTINATION_NAME'",
"JmsComponent wmq = new JmsComponent(connectionFactory); wmq.setDestinationResolver(new DestinationResolver() { public Destination resolveDestinationName(Session session, String destinationName, boolean pubSubDomain) throws JMSException { MQQueueSession wmqSession = (MQQueueSession) session; return wmqSession.createQueue(\"queue:///\" + destinationName + \"?targetClient=1\"); } });"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-jms-component-starter |
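The transacted consumption described in section 47.17 only names the two properties to set. The following is a minimal Java sketch of how that wiring could look; the connectionFactory, camelContext, and queue name are placeholders, and the Spring JmsTransactionManager shown is the typical (though not the only) transaction manager choice.

import org.apache.camel.CamelContext;
import org.apache.camel.component.jms.JmsComponent;
import org.springframework.jms.connection.JmsTransactionManager;

// Assumes an existing CamelContext (camelContext) and a javax.jms.ConnectionFactory (connectionFactory).
JmsTransactionManager txManager = new JmsTransactionManager(connectionFactory);

JmsComponent jms = new JmsComponent();
jms.setConnectionFactory(connectionFactory);
jms.setTransactionManager(txManager);
jms.setTransacted(true);
camelContext.addComponent("jms", jms);

// Inside a RouteBuilder: each message is consumed in a local JMS transaction,
// so an exception thrown by the route rolls the message back for redelivery by the broker.
from("jms:queue:orders")
    .to("bean:processOrder");

As noted in section 47.17, the transacted flag applies to the consuming (InOnly) side only; request/reply exchanges still need the separate transaction handling described there.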
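Section 47.19 (Using a request timeout) refers to a sample that is not reproduced above. A minimal sketch of such a request/reply call, assuming a ProducerTemplate named template and a queue named slow (both placeholders):

// Request/reply (InOut) using the requestBody method; Camel waits for the reply message.
// If no reply arrives within requestTimeout, the call fails with an exception caused by
// org.apache.camel.ExchangeTimedOutException, which Camel's error handler can react to.
String reply = template.requestBody(
        "activemq:queue:slow?requestTimeout=5000",   // wait up to 5 seconds for the reply
        "Hello World",
        String.class);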
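The Spring Boot options in section 47.22 are ordinary configuration keys under the camel.component.jms prefix. A small application.properties sketch using a few of the documented keys; the values shown are only illustrative and should be tuned for your broker and workload:

camel.component.jms.concurrent-consumers=5
camel.component.jms.max-concurrent-consumers=10
camel.component.jms.transacted=true
camel.component.jms.request-timeout=30000
camel.component.jms.receive-timeout=250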
function::kernel_short | function::kernel_short Name function::kernel_short - Retrieves a short value stored in kernel memory Synopsis Arguments addr The kernel address to retrieve the short from Description Returns the short value from a given kernel memory address. Reports an error when reading from the given address fails. | [
"kernel_short:long(addr:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-kernel-short |
Chapter 13. Intercepting Messages | Chapter 13. Intercepting Messages With AMQ Broker you can intercept packets entering or exiting the broker, allowing you to audit packets or filter messages. Interceptors can change the packets they intercept, which makes them powerful, but also potentially dangerous. You can develop interceptors to meet your business requirements. Interceptors are protocol specific and must implement the appropriate interface. Interceptors must implement the intercept() method, which returns a boolean value. If the value is true , the message packet continues onward. If false , the process is aborted, no other interceptors are called, and the message packet is not processed further. 13.1. Creating Interceptors You can create your own incoming and outgoing interceptors. All interceptors are protocol specific and are called for any packet entering or exiting the server respectively. This allows you to create interceptors to meet business requirements such as auditing packets. Interceptors can change the packets they intercept. This makes them powerful as well as potentially dangerous, so be sure to use them with caution. Interceptors and their dependencies must be placed in the Java classpath of the broker. You can use the BROKER_INSTANCE_DIR /lib directory since it is part of the classpath by default. Procedure The following examples demonstrate how to create an interceptor that checks the size of each packet passed to it. Note that the examples implement a specific interface for each protocol. Implement the appropriate interface and override its intercept() method. If you are using the AMQP protocol, implement the org.apache.activemq.artemis.protocol.amqp.broker.AmqpInterceptor interface. package com.example; import org.apache.activemq.artemis.protocol.amqp.broker.AMQPMessage; import org.apache.activemq.artemis.protocol.amqp.broker.AmqpInterceptor; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements AmqpInterceptor { private final int ACCEPTABLE_SIZE = 1024; @Override public boolean intercept(final AMQPMessage message, RemotingConnection connection) { int size = message.getEncodeSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println("This AMQPMessage has an acceptable size."); return true; } return false; } } If you are using Core Protocol, your interceptor must implement the org.apache.artemis.activemq.api.core.Interceptor interface. package com.example; import org.apache.artemis.activemq.api.core.Interceptor; import org.apache.activemq.artemis.core.protocol.core.Packet; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements Interceptor { private final int ACCEPTABLE_SIZE = 1024; @Override boolean intercept(Packet packet, RemotingConnection connection) throws ActiveMQException { int size = packet.getPacketSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println("This Packet has an acceptable size."); return true; } return false; } } If you are using the MQTT protocol, implement the org.apache.activemq.artemis.core.protocol.mqtt.MQTTInterceptor interface. 
package com.example; import org.apache.activemq.artemis.api.core.ActiveMQException; import org.apache.activemq.artemis.core.protocol.mqtt.MQTTInterceptor; import io.netty.handler.codec.mqtt.MqttMessage; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements MQTTInterceptor { private final int ACCEPTABLE_SIZE = 1024; @Override public boolean intercept(MqttMessage mqttMessage, RemotingConnection connection) throws ActiveMQException { byte[] msg = (mqttMessage.toString()).getBytes(); int size = msg.length; if (size <= ACCEPTABLE_SIZE) { System.out.println("This MqttMessage has an acceptable size."); return true; } return false; } } If you are using the STOMP protocol, implement the org.apache.activemq.artemis.core.protocol.stomp.StompFrameInterceptor interface. package com.example; import org.apache.activemq.artemis.api.core.ActiveMQException; import org.apache.activemq.artemis.core.protocol.stomp.StompFrameInterceptor; import org.apache.activemq.artemis.core.protocol.stomp.StompFrame; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements StompFrameInterceptor { private final int ACCEPTABLE_SIZE = 1024; @Override public boolean intercept(StompFrame stompFrame, RemotingConnection connection) throws ActiveMQException { int size = stompFrame.getEncodedSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println("This StompFrame has an acceptable size."); return true; } return false; } } 13.2. Configuring the Broker to Use Interceptors Once you have created an interceptor, you must configure the broker to use it. Prerequisites You must create an interceptor class and add it (and its dependencies) to the Java classpath of the broker before you can configure it for use by the broker. You can use the BROKER_INSTANCE_DIR /lib directory since it is part of the classpath by default. Procedure Configure the broker to use an interceptor by adding configuration to BROKER_INSTANCE_DIR /etc/broker.xml If your interceptor is intended for incoming messages, add its class-name to the list of remoting-incoming-interceptors . <configuration> <core> ... <remoting-incoming-interceptors> <class-name>org.example.MyIncomingInterceptor</class-name> </remoting-incoming-interceptors> ... </core> </configuration> If your interceptor is intended for outgoing messages, add its class-name to the list of remoting-outgoing-interceptors . <configuration> <core> ... <remoting-outgoing-interceptors> <class-name>org.example.MyOutgoingInterceptor</class-name> </remoting-outgoing-interceptors> </core> </configuration> 13.3. Interceptors on the Client Side Clients can use interceptors to intercept packets either sent by the client to the server or by the server to the client. As in the case of a broker-side interceptor, if it returns false , no other interceptors are called and the client does not process the packet further. This process happens transparently to the client except when an outgoing packet is sent in a blocking fashion. In those cases, an ActiveMQException is thrown to the caller because blocking sends provide reliability. The ActiveMQException thrown contains the name of the interceptor that returned false. As on the server, the client interceptor classes and their dependencies must be added to the Java classpath of the client to be properly instantiated and invoked.
"package com.example; import org.apache.activemq.artemis.protocol.amqp.broker.AMQPMessage; import org.apache.activemq.artemis.protocol.amqp.broker.AmqpInterceptor; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements AmqpInterceptor { private final int ACCEPTABLE_SIZE = 1024; @Override public boolean intercept(final AMQPMessage message, RemotingConnection connection) { int size = message.getEncodeSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println(\"This AMQPMessage has an acceptable size.\"); return true; } return false; } }",
"package com.example; import org.apache.artemis.activemq.api.core.Interceptor; import org.apache.activemq.artemis.core.protocol.core.Packet; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements Interceptor { private final int ACCEPTABLE_SIZE = 1024; @Override boolean intercept(Packet packet, RemotingConnection connection) throws ActiveMQException { int size = packet.getPacketSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println(\"This Packet has an acceptable size.\"); return true; } return false; } }",
"package com.example; import org.apache.activemq.artemis.core.protocol.mqtt.MQTTInterceptor; import io.netty.handler.codec.mqtt.MqttMessage; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements Interceptor { private final int ACCEPTABLE_SIZE = 1024; @Override boolean intercept(MqttMessage mqttMessage, RemotingConnection connection) throws ActiveMQException { byte[] msg = (mqttMessage.toString()).getBytes(); int size = msg.length; if (size <= ACCEPTABLE_SIZE) { System.out.println(\"This MqttMessage has an acceptable size.\"); return true; } return false; } }",
"package com.example; import org.apache.activemq.artemis.core.protocol.stomp.StompFrameInterceptor; import org.apache.activemq.artemis.core.protocol.stomp.StompFrame; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements Interceptor { private final int ACCEPTABLE_SIZE = 1024; @Override boolean intercept(StompFrame stompFrame, RemotingConnection connection) throws ActiveMQException { int size = stompFrame.getEncodedSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println(\"This StompFrame has an acceptable size.\"); return true; } return false; } }",
"<configuration> <core> <remoting-incoming-interceptors> <class-name>org.example.MyIncomingInterceptor</class-name> </remoting-incoming-interceptors> </core> </configuration>",
"<configuration> <core> <remoting-outgoing-interceptors> <class-name>org.example.MyOutgoingInterceptor</class-name> </remoting-outgoing-interceptors> </core> </configuration>"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/configuring_amq_broker/interceptors |
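A minimal deployment sketch for the classpath requirement described above, assuming a broker instance at /var/opt/amq-broker/mybroker and an interceptor packaged as my-interceptor.jar (both paths are example values, not taken from this chapter): copy the JAR into the instance lib directory and restart the broker so the class named in broker.xml can be loaded.
cp my-interceptor.jar /var/opt/amq-broker/mybroker/lib/
/var/opt/amq-broker/mybroker/bin/artemis-service stop
/var/opt/amq-broker/mybroker/bin/artemis-service start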
6.14. Migrating Virtual Machines Between Hosts | 6.14. Migrating Virtual Machines Between Hosts Live migration provides the ability to move a running virtual machine between physical hosts with no interruption to service. The virtual machine remains powered on and user applications continue to run while the virtual machine is relocated to a new physical host. In the background, the virtual machine's RAM is copied from the source host to the destination host. Storage and network connectivity are not altered. Note A virtual machine that is using a vGPU cannot be migrated to a different host. 6.14.1. Live Migration Prerequisites Note This is one in a series of topics that show how to set up and configure SR-IOV on Red Hat Virtualization. For more information, see Setting Up and Configuring SR-IOV You can use live migration to seamlessly move virtual machines to support a number of common maintenance tasks. Your Red Hat Virtualization environment must be correctly configured to support live migration well in advance of using it. At a minimum, the following prerequisites must be met to enable successful live migration of virtual machines: The source and destination hosts are members of the same cluster, ensuring CPU compatibility between them. Note Live migrating virtual machines between different clusters is generally not recommended. The source and destination hosts' status is Up . The source and destination hosts have access to the same virtual networks and VLANs. The source and destination hosts have access to the data storage domain on which the virtual machine resides. The destination host has sufficient CPU capacity to support the virtual machine's requirements. The destination host has sufficient unused RAM to support the virtual machine's requirements. The migrating virtual machine does not have the cache!=none custom property set. Live migration is performed using the management network and involves transferring large amounts of data between hosts. Concurrent migrations have the potential to saturate the management network. For best performance, create separate logical networks for management, storage, display, and virtual machine data to minimize the risk of network saturation. 6.14.2. Configuring Virtual Machines with SR-IOV-Enabled vNICs to Reduce Network Outage during Migration Virtual machines with vNICs that are directly connected to a virtual function (VF) of an SR-IOV-enabled host NIC can be further configured to reduce network outage during live migration: Ensure that the destination host has an available VF. Set the Passthrough and Migratable options in the passthrough vNIC's profile. See Enabling Passthrough on a vNIC Profile in the Administration Guide . Enable hotplugging for the virtual machine's network interface. Ensure that the virtual machine has a backup VirtIO vNIC, in addition to the passthrough vNIC, to maintain the virtual machine's network connection during migration. Set the VirtIO vNIC's No Network Filter option before configuring the bond. See Explanation of Settings in the VM Interface Profile Window in the Administration Guide . Add both vNICs as slaves under an active-backup bond on the virtual machine, with the passthrough vNIC as the primary interface. The bond and vNIC profiles can be configured in one of the following ways: The bond is not configured with fail_over_mac=active and the VF vNIC is the primary slave (recommended). 
Disable the VirtIO vNIC profile's MAC-spoofing filter to ensure that traffic passing through the VirtIO vNIC is not dropped because it uses the VF vNIC MAC address. The bond is configured with fail_over_mac=active . This failover policy ensures that the MAC address of the bond is always the MAC address of the active slave. During failover, the virtual machine's MAC address changes, with a slight disruption in traffic. 6.14.3. Configuring Virtual Machines with SR-IOV-Enabled vNICs with minimal downtime To configure virtual machines for migration with SR-IOV enabled vNICs and minimal downtime follow the procedure described below. Note The following steps are provided only as a Technology Preview. For more information see Red Hat Technology Preview Features Support Scope . Create a vNIC profile with SR-IOV enabled vNICS. See Creating a vNIC profile and Setting up and configuring SR-IOV . In the Administration Portal, go to Network VNIC profiles , select the vNIC profile, click Edit and select a Failover vNIC profile from the drop down list. Click OK to save the profile settings. Hotplug a network interface with the failover vNIC profile you created into the virtual machine, or start a virtual machine with this network interface plugged in. Note The virtual machine has three network interfaces: a controller interface and two secondary interfaces. The controller interface must be active and connected in order for migration to succeed. For automatic deployment of virtual machines with this configuration, use the following udev rule: This udev rule works only on systems that manage interfaces with NetworkManager . This rule ensures that only the controller interface is activated. 6.14.4. Optimizing Live Migration Live virtual machine migration can be a resource-intensive operation. To optimize live migration, you can set the following two options globally for every virtual machine in an environment, for every virtual machine in a cluster, or for an individual virtual machine. Note The Auto Converge migrations and Enable migration compression options are available for cluster levels 4.2 or earlier. For cluster levels 4.3 or later, auto converge is enabled by default for all built-in migration policies, and migration compression is enabled by default for only the Suspend workload if needed migration policy. You can change these parameters when adding a new migration policy, or by modifying the MigrationPolicies configuration value. The Auto Converge migrations option allows you to set whether auto-convergence is used during live migration of virtual machines. Large virtual machines with high workloads can dirty memory more quickly than the transfer rate achieved during live migration, and prevent the migration from converging. Auto-convergence capabilities in QEMU allow you to force convergence of virtual machine migrations. QEMU automatically detects a lack of convergence and triggers a throttle-down of the vCPUs on the virtual machine. The Enable migration compression option allows you to set whether migration compression is used during live migration of the virtual machine. This feature uses Xor Binary Zero Run-Length-Encoding to reduce virtual machine downtime and total live migration time for virtual machines running memory write-intensive workloads or for any application with a sparse memory update pattern. Both options are disabled globally by default. 
Procedure Enable auto-convergence at the global level: # engine-config -s DefaultAutoConvergence=True Enable migration compression at the global level: # engine-config -s DefaultMigrationCompression=True Restart the ovirt-engine service to apply the changes: # systemctl restart ovirt-engine.service Configure the optimization settings for a cluster: Click Compute Clusters and select a cluster. Click Edit . Click the Migration Policy tab. From the Auto Converge migrations list, select Inherit from global setting , Auto Converge , or Don't Auto Converge . From the Enable migration compression list, select Inherit from global setting , Compress , or Don't Compress . Click OK . Configure the optimization settings at the virtual machine level: Click Compute Virtual Machines and select a virtual machine. Click Edit . Click the Host tab. From the Auto Converge migrations list, select Inherit from cluster setting , Auto Converge , or Don't Auto Converge . From the Enable migration compression list, select Inherit from cluster setting , Compress , or Don't Compress . Click OK . 6.14.5. Guest Agent Hooks Hooks are scripts that trigger activity within a virtual machine when key events occur: Before migration After migration Before hibernation After hibernation The hooks configuration base directory is /etc/ovirt-guest-agent/hooks.d on Linux systems. Each event has a corresponding subdirectory: before_migration and after_migration , before_hibernation and after_hibernation . All files or symbolic links in that directory will be executed. The executing user on Linux systems is ovirtagent . If the script needs root permissions, the elevation must be executed by the creator of the hook script. 6.14.6. Automatic Virtual Machine Migration Red Hat Virtualization Manager automatically initiates live migration of all virtual machines running on a host when the host is moved into maintenance mode. The destination host for each virtual machine is assessed as the virtual machine is migrated, in order to spread the load across the cluster. From version 4.3, all virtual machines defined with manual or automatic migration modes are migrated when the host is moved into maintenance mode. However, for high performance and/or pinned virtual machines, a Maintenance Host window is displayed, asking you to confirm the action because the performance on the target host may be less than the performance on the current host. The Manager automatically initiates live migration of virtual machines in order to maintain load-balancing or power-saving levels in line with scheduling policy. Specify the scheduling policy that best suits the needs of your environment. You can also disable automatic, or even manual, live migration of specific virtual machines where required. If your virtual machines are configured for high performance, and/or if they have been pinned (by setting Passthrough Host CPU, CPU Pinning, or NUMA Pinning), the migration mode is set to Allow manual migration only . However, this can be changed to Allow Manual and Automatic mode if required. Special care should be taken when changing the default migration setting so that it does not result in a virtual machine migrating to a host that does not support high performance or pinning. 6.14.7. Preventing Automatic Migration of a Virtual Machine Red Hat Virtualization Manager allows you to disable automatic migration of virtual machines. You can also disable manual migration of virtual machines by setting the virtual machine to run only on a specific host. 
The ability to disable automatic migration and require a virtual machine to run on a particular host is useful when using application high availability products, such as Red Hat High Availability or Cluster Suite. Preventing Automatic Migration of Virtual Machines Click Compute Virtual Machines and select a virtual machine. Click Edit . Click the Host tab. In the Start Running On section, select Any Host in Cluster or Specific Host(s) , which enables you to select multiple hosts. Warning Explicitly assigning a virtual machine to a specific host and disabling migration are mutually exclusive with Red Hat Virtualization high availability. Important If the virtual machine has host devices directly attached to it, and a different host is specified, the host devices from the host will be automatically removed from the virtual machine. Select Allow manual migration only or Do not allow migration from the Migration Options drop-down list. Click OK . 6.14.8. Manually Migrating Virtual Machines A running virtual machine can be live migrated to any host within its designated host cluster. Live migration of virtual machines does not cause any service interruption. Migrating virtual machines to a different host is especially useful if the load on a particular host is too high. For live migration prerequisites, see Live migration prerequisites . For high performance virtual machines and/or virtual machines defined with Pass-Through Host CPU , CPU Pinning , or NUMA Pinning , the default migration mode is Manual . Select Select Host Automatically so that the virtual machine migrates to the host that offers the best performance. Note When you place a host into maintenance mode, the virtual machines running on that host are automatically migrated to other hosts in the same cluster. You do not need to manually migrate these virtual machines. Note Live migrating virtual machines between different clusters is generally not recommended. Procedure Click Compute Virtual Machines and select a running virtual machine. Click Migrate . Use the radio buttons to select whether to Select Host Automatically or to Select Destination Host , specifying the host using the drop-down list. Note When the Select Host Automatically option is selected, the system determines the host to which the virtual machine is migrated according to the load balancing and power management rules set up in the scheduling policy. Click OK . During migration, progress is shown in the Migration progress bar. Once migration is complete the Host column will update to display the host the virtual machine has been migrated to. 6.14.9. Setting Migration Priority Red Hat Virtualization Manager queues concurrent requests for migration of virtual machines off of a given host. The load balancing process runs every minute. Hosts already involved in a migration event are not included in the migration cycle until their migration event has completed. When there is a migration request in the queue and available hosts in the cluster to action it, a migration event is triggered in line with the load balancing policy for the cluster. You can influence the ordering of the migration queue by setting the priority of each virtual machine; for example, setting mission critical virtual machines to migrate before others. Migrations will be ordered by priority; virtual machines with the highest priority will be migrated first. Setting Migration Priority Click Compute Virtual Machines and select a virtual machine. Click Edit . Select the High Availability tab. 
Select Low , Medium , or High from the Priority drop-down list. Click OK . 6.14.10. Canceling Ongoing Virtual Machine Migrations A virtual machine migration is taking longer than you expected. You'd like to be sure where all virtual machines are running before you make any changes to your environment. Procedure Select the migrating virtual machine. It is displayed in Compute Virtual Machines with a status of Migrating from . Click More Actions , then click Cancel Migration . The virtual machine status returns from Migrating from to Up . 6.14.11. Event and Log Notification upon Automatic Migration of Highly Available Virtual Servers When a virtual server is automatically migrated because of the high availability function, the details of an automatic migration are documented in the Events tab and in the engine log to aid in troubleshooting, as illustrated in the following examples: Example 6.4. Notification in the Events Tab of the Administration Portal Highly Available Virtual_Machine_Name failed. It will be restarted automatically. Virtual_Machine_Name was restarted on Host Host_Name Example 6.5. Notification in the Manager engine.log This log can be found on the Red Hat Virtualization Manager at /var/log/ovirt-engine/engine.log : Failed to start Highly Available VM. Attempting to restart. VM Name: Virtual_Machine_Name , VM Id: Virtual_Machine_ID_Number
"UBSYSTEM==\"net\", ACTION==\"add|change\", ENV{ID_NET_DRIVER}!=\"net_failover\", ENV{NM_UNMANAGED}=\"1\", RUN+=\"/bin/sh -c '/sbin/ip link set up USDINTERFACE'\"",
"engine-config -s DefaultAutoConvergence=True",
"engine-config -s DefaultMigrationCompression=True",
"systemctl restart ovirt-engine.service"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/sect-migrating_virtual_machines_between_hosts |
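A minimal sketch of deploying the udev rule referenced in section 6.14.3 inside the guest (the rules file name 99-vf-failover.rules is an arbitrary example; the rule itself is the one quoted above, written here with the literal $INTERFACE variable):
cat > /etc/udev/rules.d/99-vf-failover.rules <<'EOF'
SUBSYSTEM=="net", ACTION=="add|change", ENV{ID_NET_DRIVER}!="net_failover", ENV{NM_UNMANAGED}="1", RUN+="/bin/sh -c '/sbin/ip link set up $INTERFACE'"
EOF
udevadm control --reload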
probe::scheduler.process_wait | probe::scheduler.process_wait Name probe::scheduler.process_wait - Scheduler starting to wait on a process Synopsis scheduler.process_wait Values name name of the probe point pid PID of the process scheduler is waiting on | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-scheduler-process-wait |
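A brief usage sketch built only from the values listed above: print the probe point name and the PID the scheduler starts waiting on, until interrupted with Ctrl+C.
stap -e 'probe scheduler.process_wait { printf("%s: pid %d\n", name, pid) }'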
Chapter 7. Installing automation hub | Chapter 7. Installing automation hub With the installation of the Ansible Automation Platform operator completed, the following steps install automation hub within a Red Hat OpenShift cluster. Note The resource requests and limits values are specific to this reference environment. Ensure to read the Chapter 3, Before you start section to properly calculate the values for your Red Hat OpenShift environment. Warning When an instance of automation hub is removed, the associated Persistent Volume Claims (PVCs) are not automatically deleted. This can cause issues during migration if the new deployment has the same name as the deployment. It is recommended to remove old PVCs prior to deploying a new automation hub instance in the same namespace. The steps to remove deployment PVCs can be found within Appendix B, Delete existing PVCs from AAP installations . Note Automation hub requires ReadWriteMany file-based storage, Azure Blob storage or Amazon S3-compliant storage for operation to ensure multiple pods can access shared content, such as collections. Log in to the Red Hat OpenShift web console using your cluster credentials. In the left-hand navigation menu, select Operators Installed Operators , select Ansible Automation Platform . Navigate to the Automation Hub tab, then click Create AutomationHub . Within the Form view provide a Name , e.g. my-automation-hub Within the Storage type , select your ReadWriteMany compliant storage. Note This reference environment uses Amazon S3 as its ReadWriteMany storage. Details to create an Amazon S3 bucket can be found in Appendix D, Create an Amazon S3 bucket . Provide S3 storage secret . Details on how to create within Appendix E, Creating an AWS S3 Secret . Select the Advanced configuration to expand the additional options. Within PostgreSQL container storage requirements (when using a managed instance) set storage limit to 50Gi set storage requests to 8Gi Within PostgreSQL container resource requirements (when using a managed instance) Limits: CPU cores: 500m, Memory: 1Gi Requests: CPU cores: 200m, Memory: 1Gi Within Redis deployment configuration , select Advanced configuration Select In-memory data store resource requirements Limits: CPU cores: 250m, Memory: 200Mi Requests: CPU cores: 100m, Memory: 200Mi Within API server configuration , select Advanced configuration Select API server resource requirements Limits: CPU cores: 250m, Memory: 400Mi Requests: CPU cores: 150m, Memory: 400Mi Within Content server configuration , select Advanced configuration Select Content server resource requirements Limits: CPU cores: 250m, Memory: 400Mi Requests: CPU cores: 100m, Memory: 400Mi Within Worker configuration , select Advanced configuration Select Worker resource requirements Limits: CPU cores: 1000m, Memory: 3Gi Requests: CPU cores: 500m, Memory: 3Gi Click the Create button | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/deploying_ansible_automation_platform_2_on_red_hat_openshift/install_ahub |
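A short sketch of the PVC cleanup called out in the warning above, assuming the operator namespace is aap and the old deployment was named my-automation-hub (namespace, deployment name, and the label selector are assumptions that may differ in your cluster):
oc get pvc -n aap
oc delete pvc -n aap -l app.kubernetes.io/instance=my-automation-hub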
Chapter 20. OperatorHub [config.openshift.io/v1] | Chapter 20. OperatorHub [config.openshift.io/v1] Description OperatorHub is the Schema for the operatorhubs API. It can be used to change the state of the default hub sources for OperatorHub on the cluster from enabled to disabled and vice versa. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 20.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object OperatorHubSpec defines the desired state of OperatorHub status object OperatorHubStatus defines the observed state of OperatorHub. The current state of the default hub sources will always be reflected here. 20.1.1. .spec Description OperatorHubSpec defines the desired state of OperatorHub Type object Property Type Description disableAllDefaultSources boolean disableAllDefaultSources allows you to disable all the default hub sources. If this is true, a specific entry in sources can be used to enable a default source. If this is false, a specific entry in sources can be used to disable or enable a default source. sources array sources is the list of default hub sources and their configuration. If the list is empty, it implies that the default hub sources are enabled on the cluster unless disableAllDefaultSources is true. If disableAllDefaultSources is true and sources is not empty, the configuration present in sources will take precedence. The list of default hub sources and their current state will always be reflected in the status block. sources[] object HubSource is used to specify the hub source and its configuration 20.1.2. .spec.sources Description sources is the list of default hub sources and their configuration. If the list is empty, it implies that the default hub sources are enabled on the cluster unless disableAllDefaultSources is true. If disableAllDefaultSources is true and sources is not empty, the configuration present in sources will take precedence. The list of default hub sources and their current state will always be reflected in the status block. Type array 20.1.3. .spec.sources[] Description HubSource is used to specify the hub source and its configuration Type object Property Type Description disabled boolean disabled is used to disable a default hub source on cluster name string name is the name of one of the default hub sources 20.1.4. .status Description OperatorHubStatus defines the observed state of OperatorHub. The current state of the default hub sources will always be reflected here. 
Type object Property Type Description sources array sources encapsulates the result of applying the configuration for each hub source sources[] object HubSourceStatus is used to reflect the current state of applying the configuration to a default source 20.1.5. .status.sources Description sources encapsulates the result of applying the configuration for each hub source Type array 20.1.6. .status.sources[] Description HubSourceStatus is used to reflect the current state of applying the configuration to a default source Type object Property Type Description disabled boolean disabled is used to disable a default hub source on cluster message string message provides more information regarding failures name string name is the name of one of the default hub sources status string status indicates success or failure in applying the configuration 20.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/operatorhubs DELETE : delete collection of OperatorHub GET : list objects of kind OperatorHub POST : create an OperatorHub /apis/config.openshift.io/v1/operatorhubs/{name} DELETE : delete an OperatorHub GET : read the specified OperatorHub PATCH : partially update the specified OperatorHub PUT : replace the specified OperatorHub /apis/config.openshift.io/v1/operatorhubs/{name}/status GET : read status of the specified OperatorHub PATCH : partially update status of the specified OperatorHub PUT : replace status of the specified OperatorHub 20.2.1. /apis/config.openshift.io/v1/operatorhubs HTTP method DELETE Description delete collection of OperatorHub Table 20.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OperatorHub Table 20.2. HTTP responses HTTP code Reponse body 200 - OK OperatorHubList schema 401 - Unauthorized Empty HTTP method POST Description create an OperatorHub Table 20.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 20.4. Body parameters Parameter Type Description body OperatorHub schema Table 20.5. HTTP responses HTTP code Reponse body 200 - OK OperatorHub schema 201 - Created OperatorHub schema 202 - Accepted OperatorHub schema 401 - Unauthorized Empty 20.2.2. /apis/config.openshift.io/v1/operatorhubs/{name} Table 20.6. 
Global path parameters Parameter Type Description name string name of the OperatorHub HTTP method DELETE Description delete an OperatorHub Table 20.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 20.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OperatorHub Table 20.9. HTTP responses HTTP code Reponse body 200 - OK OperatorHub schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OperatorHub Table 20.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 20.11. HTTP responses HTTP code Reponse body 200 - OK OperatorHub schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OperatorHub Table 20.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 20.13. Body parameters Parameter Type Description body OperatorHub schema Table 20.14. HTTP responses HTTP code Reponse body 200 - OK OperatorHub schema 201 - Created OperatorHub schema 401 - Unauthorized Empty 20.2.3. /apis/config.openshift.io/v1/operatorhubs/{name}/status Table 20.15. Global path parameters Parameter Type Description name string name of the OperatorHub HTTP method GET Description read status of the specified OperatorHub Table 20.16. HTTP responses HTTP code Reponse body 200 - OK OperatorHub schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified OperatorHub Table 20.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 20.18. HTTP responses HTTP code Reponse body 200 - OK OperatorHub schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified OperatorHub Table 20.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 20.20. 
Body parameters Parameter Type Description body OperatorHub schema Table 20.21. HTTP responses HTTP code Reponse body 200 - OK OperatorHub schema 201 - Created OperatorHub schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/config_apis/operatorhub-config-openshift-io-v1 |
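A short CLI example that exercises the spec fields described above against the cluster-scoped singleton named cluster: disable all default hub sources, then re-enable a single default source (redhat-operators is used here as an example source name).
oc patch operatorhub cluster --type merge -p '{"spec":{"disableAllDefaultSources":true}}'
oc patch operatorhub cluster --type merge -p '{"spec":{"sources":[{"name":"redhat-operators","disabled":false}]}}'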
Chapter 1. Upgrading ROSA with HCP clusters | Chapter 1. Upgrading ROSA with HCP clusters 1.1. Upgrade options for ROSA with HCP clusters In OpenShift, upgrading means provisioning a new component with updated software and using it to replace an existing component that has outdated software. You can control the impact of upgrades to your workload by controlling which parts of the cluster are upgraded, for example: Upgrade only the hosted control plane This initiates upgrade of the hosted control plane. It does not impact your worker nodes. Upgrade nodes in a machine pool This initiates a rolling replacement of nodes in the specified machine pool, and temporarily impacts the worker nodes on that machine pool. You can also upgrade multiple machine pools concurrently. Important You cannot upgrade the hosted control plane at the same time as any machine pool upgrade. Important To maintain compatibility between nodes in the cluster, nodes in machine pools cannot use a newer version than the hosted control plane. This means that the hosted control plane should always be upgraded to a given version before any machine pools are upgraded to the same version. You can further control the time required for a machine pool upgrade, and the impact of an upgrade to your workload, by editing the --max-surge and --max-unavailable values for each machine pool. These options control the number of nodes that can be upgraded simultaneously on a machine pool, and whether an upgrade provisions excess nodes or makes some existing nodes unavailable or both, for example: To prioritize high workload availability , you can provision excess nodes instead of making existing nodes unavailable by setting a higher value for --max-surge and setting --max-unavailable to 0 . To prioritize lower infrastructure costs , you can make some existing nodes unavailable and avoid provisioning excess nodes by setting a higher value for --max-unavailable and setting --max-surge to 0 . To prioritize upgrade speed by upgrading multiple nodes simultaneously , you can provision excess nodes and allow some existing nodes to be made unavailable by configuring moderate values for both --max-surge and --max-unavailable . For more information about these parameters and their usage, see the ROSA CLI reference for rosa edit machinepool . Additional resources ROSA CLI reference: rosa edit machinepool 1.2. Life cycle policies and planning To plan an upgrade, review the Red Hat OpenShift Service on AWS update life cycle . The life cycle page includes release definitions, support and upgrade requirements, installation policy information and life cycle dates. Upgrades are manually initiated or automatically scheduled. Red Hat Site Reliability Engineers (SREs) monitor upgrade progress and remedy any issues encountered. Note If your control plane is not currently multi-architecture enabled, the upgrade process will first migrate the cluster to a multi-architecture image and then apply the version upgrade. Multi-architecture clusters are capable of running both x86-based and Arm-based workloads. Clusters created after 25 July, 2024 are multi-architecture enabled by default. 1.3. Upgrading the hosted control plane with the ROSA CLI You can manually upgrade the hosted control plane of a ROSA with HCP cluster by using the ROSA CLI. This method schedules the control plane for an upgrade if a more recent version is available, either immediately, or at a specified future time. 
Note Your control plane only supports machine pools within two minor Y-stream versions. For example, a ROSA with HCP cluster with a control plane using version 4.15.z supports machine pools with version 4.13.z and 4.14.z, but the control plane does not support machine pools using version 4.12.z. Prerequisites You have installed and configured the latest version of the ROSA CLI. No machine pool upgrades are in progress or scheduled to take place at the same time as the hosted control plane upgrade. Procedure Verify the current version of your cluster by running the following command: USD rosa describe cluster --cluster=<cluster_name_or_id> 1 1 Replace <cluster_name_or_id> with the cluster name or the cluster ID. List the versions that you can upgrade your control plane to by running the following command: USD rosa list upgrade --cluster=<cluster_name_or_id> The command returns a list of available updates, including the recommended version. Example output VERSION NOTES 4.14.8 recommended 4.14.7 4.14.6 Upgrade the cluster's hosted control plane by running the following command: USD rosa upgrade cluster -c <cluster_name_or_id> --control-plane [--schedule-date=<yyyy-mm-dd> --schedule-time=<HH:mm>] --version <version_number> To schedule an immediate upgrade to the specified version, run the following command: USD rosa upgrade cluster -c <cluster_name_or_id> --control-plane --version <version_number> Your hosted control plane is scheduled for an immediate upgrade. To schedule an upgrade to the specified version at a future date, run the following command: USD rosa upgrade cluster -c <cluster_name_or_id> --control-plane --schedule-date=<yyyy-mm-dd> --schedule-time=<HH:mm> --version=<version_number> Your hosted control plane is scheduled for an upgrade at the specified time in Coordinated Universal Time (UTC). Troubleshooting Sometimes a scheduled upgrade does not initiate. See Upgrade maintenance canceled for more information. 1.4. Upgrading machine pools with the ROSA CLI You can manually upgrade one or more machine pools in a ROSA with HCP cluster by using the ROSA CLI. This method schedules the specified machine pool for an upgrade if a more recent version is available, either immediately, or at a specified future time. Note Your control plane only supports machine pools within two minor Y-stream versions. For example, a ROSA with HCP cluster with a control plane using version 4.15.z supports machine pools with version 4.13.z and 4.14.z, but the control plane does not support machine pools using version 4.12.z. Prerequisites You have installed and configured the latest version of the ROSA CLI. No upgrades for the hosted control plane are in progress on the cluster, or scheduled to occur at the same time as the machine pool upgrade. Note Machine pool configurations such as node drain timeout, max-unavailable, and max-surge can affect the timing and success of upgrades. Procedure Verify the current version of your cluster by running the following command: USD rosa describe cluster --cluster=<cluster_name_or_id> 1 1 Replace <cluster_name_or_id> with the cluster name or the cluster ID. Example output OpenShift Version: 4.14.0 List the versions that you can upgrade your machine pools to by running the following command: USD rosa list upgrade --cluster <cluster-name> --machinepool <machinepool_name> The command returns a list of available updates, including the recommended version. 
Example output VERSION NOTES 4.14.5 recommended 4.14.4 4.14.3 Important Do not upgrade your machine pool to a version higher than your control plane. If you want to move to a higher version, upgrade the control plane to that version first. Verify the upgrade behavior of the machine pools you intend to upgrade by running the following command: USD rosa describe machinepool --cluster=<cluster_name_or_id> <machinepool_name> Example output Replicas: 5 Node drain grace period: 30 minutes Management upgrade: - Type: Replace - Max surge: 20% - Max unavailable: 20% In the example, these settings allow the machine pool to provision one excess node ( max-surge of 20% of replicas ) and to have up to one node unavailable ( max-unavailable of 20% of replicas ) during an upgrade. This machine pool can therefore upgrade two nodes at a time, by provisioning one new node in excess of the replica count, and by making one node unavailable and replacing it. Node upgrades may be delayed by up to 30 minutes ( node-drain-grace-period of 30 minutes) if necessary to protect workloads that have a pod disruption budget. Upgrade a machine pool by running the following command: USD rosa upgrade machinepool -c <cluster_name> <machinepool_name> [--schedule-date=<yyyy-mm-dd> --schedule-time=<HH:mm>] --version <version_number> You can upgrade multiple machine pools concurrently by running this command for each machine pool you want to upgrade. To schedule the immediate upgrade of a machine pool, run the following command: USD rosa upgrade machinepool -c <cluster_name> <machinepool_name> --version <version_number> The machine pool is scheduled for immediate upgrade, which initiates a rolling replacement of all nodes in the specified machine pool. To schedule an upgrade to start at a future time, run the following command: USD rosa upgrade machinepool -c <cluster_name> <machinepool_name> --schedule-date=<yyyy-mm-dd> --schedule-time=<HH:mm> --version <version_number> The machine pool is scheduled to begin an upgrade at the specified time and date in Coordinated Universal Time (UTC). This will initiate a rolling replacement of all nodes in the specified machine pool, beginning at the specified time. | [
"rosa describe cluster --cluster=<cluster_name_or_id> 1",
"rosa list upgrade --cluster=<cluster_name_or_id>",
"VERSION NOTES 4.14.8 recommended 4.14.7 4.14.6",
"rosa upgrade cluster -c <cluster_name_or_id> --control-plane [--schedule-date=<yyyy-mm-dd> --schedule-time=<HH:mm>] --version <version_number>",
"rosa upgrade cluster -c <cluster_name_or_id> --control-plane --version <version_number>",
"rosa upgrade cluster -c <cluster_name_or_id> --control-plane --schedule-date=<yyyy-mm-dd> --schedule-time=<HH:mm> --version=<version_number>",
"rosa describe cluster --cluster=<cluster_name_or_id> 1",
"OpenShift Version: 4.14.0",
"rosa list upgrade --cluster <cluster-name> --machinepool <machinepool_name>",
"VERSION NOTES 4.14.5 recommended 4.14.4 4.14.3",
"rosa describe machinepool --cluster=<cluster_name_or_id> <machinepool_name>",
"Replicas: 5 Node drain grace period: 30 minutes Management upgrade: - Type: Replace - Max surge: 20% - Max unavailable: 20%",
"rosa upgrade machinepool -c <cluster_name> <machinepool_name> [--schedule-date=<yyyy-mm-dd> --schedule-time=<HH:mm>] --version <version_number>",
"rosa upgrade machinepool -c <cluster_name> <machinepool_name> --version <version_number>",
"rosa upgrade machinepool -c <cluster_name> <machinepool_name> --schedule-date=<yyyy-mm-dd> --schedule-time=<HH:mm> --version <version_number>"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/upgrading/rosa-hcp-upgrading |
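A brief sketch that combines the rollout controls described in section 1.1 with the upgrade command above (cluster name, machine pool name, and version are example values): favour workload availability by surging one extra node and keeping all existing nodes available, then start the upgrade.
rosa edit machinepool -c my-cluster workers --max-surge=1 --max-unavailable=0
rosa upgrade machinepool -c my-cluster workers --version 4.14.5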
Chapter 9. Managing Red Hat High Availability Add-On With Command Line Tools | Chapter 9. Managing Red Hat High Availability Add-On With Command Line Tools This chapter describes various administrative tasks for managing Red Hat High Availability Add-On and consists of the following sections: Section 9.1, "Starting and Stopping the Cluster Software" Section 9.2, "Deleting or Adding a Node" Section 9.3, "Managing High-Availability Services" Section 9.4, "Updating a Configuration" Important Make sure that your deployment of Red Hat High Availability Add-On meets your needs and can be supported. Consult with an authorized Red Hat representative to verify your configuration prior to deployment. In addition, allow time for a configuration burn-in period to test failure modes. Important This chapter references commonly used cluster.conf elements and attributes. For a comprehensive list and description of cluster.conf elements and attributes, see the cluster schema at /usr/share/cluster/cluster.rng , and the annotated schema at /usr/share/doc/cman-X.Y.ZZ/cluster_conf.html (for example /usr/share/doc/cman-3.0.12/cluster_conf.html ). Important Certain procedures in this chapter call for using the cman_tool version -r command to propagate a cluster configuration throughout a cluster. Using that command requires that ricci is running. Note Procedures in this chapter may include specific commands for some of the command-line tools listed in Appendix E, Command Line Tools Summary . For more information about all commands and variables, see the man page for each command-line tool. 9.1. Starting and Stopping the Cluster Software You can start or stop cluster software on a node according to Section 9.1.1, "Starting Cluster Software" and Section 9.1.2, "Stopping Cluster Software" . Starting cluster software on a node causes it to join the cluster; stopping the cluster software on a node causes it to leave the cluster. 9.1.1. Starting Cluster Software To start the cluster software on a node, type the following commands in this order: service cman start service clvmd start , if CLVM has been used to create clustered volumes service gfs2 start , if you are using Red Hat GFS2 service rgmanager start , if you are using high-availability (HA) services ( rgmanager ). For example: 9.1.2. Stopping Cluster Software To stop the cluster software on a node, type the following commands in this order: service rgmanager stop , if you are using high-availability (HA) services ( rgmanager ). service gfs2 stop , if you are using Red Hat GFS2 umount -at gfs2 , if you are using Red Hat GFS2 in conjunction with rgmanager , to ensure that any GFS2 files mounted during rgmanager startup (but not unmounted during shutdown) were also unmounted. service clvmd stop , if CLVM has been used to create clustered volumes service cman stop For example: Note Stopping cluster software on a node causes its HA services to fail over to another node. As an alternative to that, consider relocating or migrating HA services to another node before stopping cluster software. For information about managing HA services, see Section 9.3, "Managing High-Availability Services" .
"service cman start Starting cluster: Checking Network Manager... [ OK ] Global setup... [ OK ] Loading kernel modules... [ OK ] Mounting configfs... [ OK ] Starting cman... [ OK ] Waiting for quorum... [ OK ] Starting fenced... [ OK ] Starting dlm_controld... [ OK ] Starting gfs_controld... [ OK ] Unfencing self... [ OK ] Joining fence domain... [ OK ] service clvmd start Starting clvmd: [ OK ] Activating VG(s): 2 logical volume(s) in volume group \"vg_example\" now active [ OK ] service gfs2 start Mounting GFS2 filesystem (/mnt/gfsA): [ OK ] Mounting GFS2 filesystem (/mnt/gfsB): [ OK ] service rgmanager start Starting Cluster Service Manager: [ OK ]",
"service rgmanager stop Stopping Cluster Service Manager: [ OK ] service gfs2 stop Unmounting GFS2 filesystem (/mnt/gfsA): [ OK ] Unmounting GFS2 filesystem (/mnt/gfsB): [ OK ] umount -at gfs2 service clvmd stop Signaling clvmd to exit [ OK ] clvmd terminated [ OK ] service cman stop Stopping cluster: Leaving fence domain... [ OK ] Stopping gfs_controld... [ OK ] Stopping dlm_controld... [ OK ] Stopping fenced... [ OK ] Stopping cman... [ OK ] Waiting for corosync to shutdown: [ OK ] Unloading kernel modules... [ OK ] Unmounting configfs... [ OK ]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/ch-mgmt-cli-CA |
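A small node-local wrapper sketch for the documented start order (edit the service list to drop clvmd, gfs2, or rgmanager if you do not use CLVM, GFS2, or HA services):
#!/bin/sh
# start the cluster software on this node in the order given in Section 9.1.1
for svc in cman clvmd gfs2 rgmanager; do
    service "$svc" start || exit 1
done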
20.2. Configuring Network Encryption for a New Trusted Storage Pool | 20.2. Configuring Network Encryption for a New Trusted Storage Pool Follow this section to configure I/O and management encryption on a freshly installed Red Hat Gluster Storage deployment that does not yet have a trusted storage pool configured. 20.2.1. Enabling Management Encryption Red Hat recommends enabling both management and I/O encryption, but if you only want to use I/O encryption, you can skip this section and continue with Section 20.2.2, "Enabling I/O Encryption" . Procedure 20.3. Enabling management encryption on servers Perform the following steps on all servers. Create and edit the secure-access file Create a new /var/lib/glusterd/secure-access file. This file can be empty if you are using the default settings. Your Certificate Authority may require changes to the SSL certificate depth setting, transport.socket.ssl-cert-depth , in order to work correctly. To edit this setting, add the following line to the secure-access file, replacing n with the certificate depth required by your Certificate Authority. Start glusterd On Red Hat Enterprise Linux 7 based servers, run: On Red Hat Enterprise Linux 6 based servers, run: Important Red Hat Gluster Storage is not supported on Red Hat Enterprise Linux 6 (RHEL 6) from 3.5 Batch Update 1 onwards. See Version Details table in section Red Hat Gluster Storage Software Components and Versions of the Installation Guide Continue storage configuration Proceed with the normal configuration process by setting up the trusted storage pool, formatting bricks, and creating volumes. For more information, see Chapter 4, Adding Servers to the Trusted Storage Pool and Chapter 5, Setting Up Storage Volumes . Procedure 20.4. Enabling management encryption on clients Prerequisites You must have configured a trusted storage pool, bricks, and volumes before following this process. For more information, see Chapter 4, Adding Servers to the Trusted Storage Pool and Chapter 5, Setting Up Storage Volumes . Perform the following steps on all clients. Create and edit the secure-access file Create the /var/lib/glusterd directory, and create a new /var/lib/glusterd/secure-access file. This file can be empty if you are using the default settings. Your Certificate Authority may require changes to the SSL certificate depth setting, transport.socket.ssl-cert-depth , in order to work correctly. To edit this setting, add the following line to the secure-access file, replacing n with the certificate depth required by your Certificate Authority. Start the volume On the server, start the volume. Mount the volume The process for mounting a volume depends on the protocol your client is using. The following command mounts a volume called testvol using the native FUSE protocol. 20.2.2. Enabling I/O Encryption Follow this section to enable I/O encryption between servers and clients. Procedure 20.5. Enabling I/O encryption Prerequisites You must have volumes configured, but not started, to perform this process. See Chapter 5, Setting Up Storage Volumes for information on creating volumes. To stop a volume, run the following command: Run the following commands from any Gluster server. Specify servers and clients to allow Provide a list of the common names of servers and clients that are allowed to access the volume. The common names provided must be exactly the same as the common name specified when you created the glusterfs.pem file for that server or client. 
This provides an additional check in case you want to leave keys in place, but temporarily restrict a client or server by removing it from this list, as shown in Section 20.7, "Deauthorizing a Client". You can also use the default value of *, which indicates that any TLS-authenticated machine can mount and access the volume. Enable TLS/SSL on the volume Start the volume Verify Verify that the volume can be mounted on authorized clients, and that the volume cannot be mounted by unauthorized clients. The process for mounting a volume depends on the protocol your client is using. The following command mounts a volume called testvol using the native FUSE protocol. (An illustrative verification sketch follows the command list below.) | [
"touch /var/lib/glusterd/secure-access",
"echo \"option transport.socket.ssl-cert-depth n \" > /var/lib/glusterd/secure-access",
"systemctl start glusterd",
"service glusterd start",
"touch /var/lib/glusterd/secure-access",
"echo \"option transport.socket.ssl-cert-depth n \" > /var/lib/glusterd/secure-access",
"gluster volume start volname",
"mount -t glusterfs server1:testvol /mnt/glusterfs",
"gluster volume stop volname",
"gluster volume set volname auth.ssl-allow ' server1 , server2 , client1 , client2 , client3 '",
"gluster volume set volname client.ssl on gluster volume set volname server.ssl on",
"gluster volume start volname",
"mount -t glusterfs server1:testvol /mnt/glusterfs"
] | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/chap-network_encryption-new_pool |
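The commands above show how to enable management and I/O encryption, but not how to confirm the result. The following is a minimal verification sketch rather than part of the official procedure; it assumes the volume name testvol and the server name server1 used in the examples above, and that the openssl client is available.

# Check that the TLS volume options were applied (run on any Gluster server).
gluster volume info testvol | grep -E 'client.ssl|server.ssl|auth.ssl-allow'

# Confirm that the glusterd management port (24007) now negotiates TLS.
# Without a valid client certificate the handshake may still be rejected;
# the point is only to see a TLS negotiation instead of a plain TCP connection.
openssl s_client -connect server1:24007 < /dev/null

If the volume options do not appear in the output, re-run the gluster volume set commands above before remounting clients.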
Chapter 7. CentralHealthService | Chapter 7. CentralHealthService 7.1. GetUpgradeStatus GET /v1/centralhealth/upgradestatus 7.1.1. Description 7.1.2. Parameters 7.1.3. Return Type V1GetUpgradeStatusResponse 7.1.4. Content Type application/json 7.1.5. Responses Table 7.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetUpgradeStatusResponse 0 An unexpected error response. RuntimeError 7.1.6. Samples 7.1.7. Common object reference 7.1.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 7.1.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 7.1.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 7.1.7.3. V1CentralUpgradeStatus Field Name Required Nullable Type Description Format version String forceRollbackTo String The version of clone in Central. This is the version we can force rollback to. 
canRollbackAfterUpgrade Boolean If true, we can roll back to the current version if an upgrade failed. spaceRequiredForRollbackAfterUpgrade String int64 spaceAvailableForRollbackAfterUpgrade String int64 7.1.7.4. V1GetUpgradeStatusResponse Field Name Required Nullable Type Description Format upgradeStatus V1CentralUpgradeStatus (An illustrative curl request sketch follows the code samples below.) | [
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/api_reference/centralhealthservice |
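The reference above documents the endpoint and its response types, but not an example invocation. The following request sketch is illustrative and not taken from the reference itself; ROX_CENTRAL_ADDRESS and ROX_API_TOKEN are placeholder environment variables for the Central address (host:port) and an API token with read access, and -k assumes a self-signed Central certificate.

# Query the upgrade status endpoint documented above.
curl -sk \
  -H "Authorization: Bearer ${ROX_API_TOKEN}" \
  "https://${ROX_CENTRAL_ADDRESS}/v1/centralhealth/upgradestatus"

# On success the body is a V1GetUpgradeStatusResponse JSON object whose upgradeStatus
# field carries version, forceRollbackTo, canRollbackAfterUpgrade, and the space fields
# described in the tables above; errors return the RuntimeError structure.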
Chapter 4. HOME: Checking connected clusters | Chapter 4. HOME: Checking connected clusters The homepage offers a snapshot of connected Kafka clusters, providing information on the Kafka version and associated project for each cluster. To find more information, log in to a cluster. 4.1. Logging in to a Kafka cluster The console supports authenticated user login to a Kafka cluster using SCRAM-SHA-512 and OAuth 2.0 authentication mechanisms. For secure login, authentication must be configured in Streams for Apache Kafka. Note If authentication is not set up for a Kafka cluster or the credentials have been provided using the Kafka sasl.jaas.config property (which defines SASL authentication settings) in the console configuration, you can log in anonymously to the cluster without authentication. Prerequisites You must have access to an OpenShift Container Platform cluster. The console must be deployed and set up to connect to a Kafka cluster. For secure login, you must have appropriate authentication settings for the Kafka cluster and user. SCRAM-SHA-512 settings Listener authentication set to scram-sha-512 in Kafka.spec.kafka.listeners[*].authentication. Username and password configured in KafkaUser.spec.authentication. OAuth 2.0 settings An OAuth 2.0 authorization server with client definitions for the Kafka cluster and users. Listener authentication set to oauth in Kafka.spec.kafka.listeners[*].authentication. For more information on configuring authentication, see the Streams for Apache Kafka documentation. Procedure From the homepage, click Login to cluster for a selected Kafka cluster. Enter login credentials depending on the authentication method used. For SCRAM-SHA-512, enter the username and password associated with the KafkaUser. For OAuth 2.0, provide a client ID and client secret that are valid for the OAuth provider configured for the Kafka listener. To end your session, click your username and then Logout or navigate back to the homepage. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_the_streams_for_apache_kafka_console/con-homepage-checking-connected-users-str
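As a companion to the SCRAM-SHA-512 prerequisites above, the following KafkaUser sketch is illustrative rather than an excerpt from the Streams for Apache Kafka documentation; the user name console-user, the namespace kafka, and the cluster name my-cluster are placeholders for your own values.

# Illustrative KafkaUser providing SCRAM-SHA-512 credentials for console login.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: console-user
  namespace: kafka                      # placeholder namespace
  labels:
    strimzi.io/cluster: my-cluster      # must match the Kafka cluster name
spec:
  authentication:
    type: scram-sha-512

Once the resource is reconciled, the User Operator typically stores the generated password in a Secret named after the KafkaUser; that username and password pair is what the console login prompt expects for SCRAM-SHA-512.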
Logging | Logging OpenShift Container Platform 4.17 Configuring and using logging in OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.",
"oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')",
"tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408",
"__error__ JSONParserErr __error_details__ Value looks like object, but can't find closing '}' symbol",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2024-10-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc create sa collector -n openshift-logging",
"oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector",
"oc project openshift-logging",
"oc adm policy add-cluster-role-to-user collect-application-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-audit-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector",
"apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging spec: serviceAccount: name: collector outputs: - name: default-lokistack type: lokiStack lokiStack: authentication: token: from: serviceAccount target: name: logging-loki namespace: openshift-logging tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: default-logstore inputRefs: - application - infrastructure outputRefs: - default-lokistack",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2024-10-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc create sa collector -n openshift-logging",
"oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector",
"oc project openshift-logging",
"oc adm policy add-cluster-role-to-user collect-application-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-audit-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector",
"apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging annotations: observability.openshift.io/tech-preview-otlp-output: \"enabled\" 1 spec: serviceAccount: name: collector outputs: - name: loki-otlp type: lokiStack 2 lokiStack: target: name: logging-loki namespace: openshift-logging dataModel: Otel 3 authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: my-pipeline inputRefs: - application - infrastructure outputRefs: - loki-otlp",
"apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: upgradeStrategy: Default",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: channel: stable-6.<y> 2 installPlanApproval: Automatic 3 name: loki-operator source: redhat-operators 4 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 1 namespace: openshift-logging stringData: 2 access_key_id: <access_key_id> access_key_secret: <access_secret> bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1",
"oc apply -f <filename>.yaml",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: \"<yyyy>-<mm>-<dd>\" 4 secret: name: logging-loki-s3 5 type: s3 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8",
"oc apply -f <filename>.yaml",
"oc get pods -n openshift-logging",
"oc get pods -n openshift-logging NAME READY STATUS RESTARTS AGE logging-loki-compactor-0 1/1 Running 0 42m logging-loki-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m logging-loki-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m logging-loki-gateway-5f6c75f879-xhq98 2/2 Running 0 42m logging-loki-index-gateway-0 1/1 Running 0 42m logging-loki-ingester-0 1/1 Running 0 42m logging-loki-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m logging-loki-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: upgradeStrategy: Default",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable-6.<y> 2 installPlanApproval: Automatic 3 name: cluster-logging source: redhat-operators 4 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"oc create sa logging-collector -n openshift-logging",
"oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z logging-collector -n openshift-logging oc adm policy add-cluster-role-to-user collect-application-logs -z logging-collector -n openshift-logging oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logging-collector -n openshift-logging",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging 1 spec: serviceAccount: name: logging-collector 2 outputs: - name: lokistack-out type: lokiStack 3 lokiStack: target: 4 name: logging-loki namespace: openshift-logging authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: infra-app-logs inputRefs: 5 - application - infrastructure outputRefs: - lokistack-out",
"oc apply -f <filename>.yaml",
"oc get pods -n openshift-logging",
"oc get pods -n openshift-logging NAME READY STATUS RESTARTS AGE cluster-logging-operator-fb7f7cf69-8jsbq 1/1 Running 0 98m instance-222js 2/2 Running 0 18m instance-g9ddv 2/2 Running 0 18m instance-hfqq8 2/2 Running 0 18m instance-sphwg 2/2 Running 0 18m instance-vv7zn 2/2 Running 0 18m instance-wk5zz 2/2 Running 0 18m logging-loki-compactor-0 1/1 Running 0 42m logging-loki-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m logging-loki-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m logging-loki-gateway-5f6c75f879-xhq98 2/2 Running 0 42m logging-loki-index-gateway-0 1/1 Running 0 42m logging-loki-ingester-0 1/1 Running 0 42m logging-loki-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m logging-loki-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m",
"apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 labels: openshift.io/cluster-monitoring: \"true\" 2",
"apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 1 namespace: openshift-logging 2 stringData: 3 access_key_id: <access_key_id> access_key_secret: <access_key> bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: \"<yyyy>-<mm>-<dd>\" secret: name: logging-loki-s3 4 type: s3 5 storageClassName: <storage_class_name> 6 tenants: mode: openshift-logging 7",
"apiVersion: v1 kind: ServiceAccount metadata: name: logging-collector 1 namespace: openshift-logging 2",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: logging-collector:write-logs roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: logging-collector-logs-writer 1 subjects: - kind: ServiceAccount name: logging-collector namespace: openshift-logging --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: logging-collector:collect-application roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: collect-application-logs 2 subjects: - kind: ServiceAccount name: logging-collector namespace: openshift-logging --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: logging-collector:collect-infrastructure roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: collect-infrastructure-logs 3 subjects: - kind: ServiceAccount name: logging-collector namespace: openshift-logging",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging 1 spec: serviceAccount: name: logging-collector 2 outputs: - name: lokistack-out type: lokiStack 3 lokiStack: target: 4 name: logging-loki namespace: openshift-logging authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: infra-app-logs inputRefs: 5 - application - infrastructure outputRefs: - lokistack-out",
"oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: manager-rolebinding roleRef: 1 apiGroup: rbac.authorization.k8s.io 2 kind: ClusterRole 3 name: cluster-logging-operator 4 subjects: 5 - kind: ServiceAccount 6 name: cluster-logging-operator 7 namespace: openshift-logging 8",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-application-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - application 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-audit-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - audit 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-infrastructure-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - infrastructure 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: clusterlogforwarder-editor-role rules: 1 - apiGroups: 2 - observability.openshift.io 3 resources: 4 - clusterlogforwarders 5 verbs: 6 - create 7 - delete 8 - get 9 - list 10 - patch 11 - update 12 - watch 13",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector annotations: observability.openshift.io/log-level: debug",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: annotations: observability.openshift.io/tech-preview-otlp-output: \"enabled\" 1 name: clf-otlp spec: serviceAccount: name: <service_account_name> outputs: - name: otlp type: otlp otlp: tuning: compression: gzip deliveryMode: AtLeastOnce maxRetryDuration: 20 maxWrite: 10M minRetryDuration: 5 url: <otlp_url> 2 pipelines: - inputRefs: - application - infrastructure - audit name: otlp-logs outputRefs: - otlp",
"java.lang.NullPointerException: Cannot invoke \"String.toString()\" because \"<param1>\" is null at testjava.Main.handle(Main.java:47) at testjava.Main.printMe(Main.java:19) at testjava.Main.main(Main.java:10)",
"apiVersion: \"observability.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> filters: - name: <name> type: detectMultilineException pipelines: - inputRefs: - <input-name> name: <pipeline-name> filterRefs: - <filter-name> outputRefs: - <output-name>",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: managementState: Managed outputs: - name: <output_name> type: http http: headers: 1 h1: v1 h2: v2 authentication: username: key: username secretName: <http_auth_secret> password: key: password secretName: <http_auth_secret> timeout: 300 proxyURL: <proxy_url> 2 url: <url> 3 tls: insecureSkipVerify: 4 ca: key: <ca_certificate> secretName: <secret_name> 5 pipelines: - inputRefs: - application name: pipe1 outputRefs: - <output_name> 6 serviceAccount: name: <service_account_name> 7",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector spec: managementState: Managed outputs: - name: rsyslog-east 1 syslog: appName: <app_name> 2 enrichment: KubernetesMinimal facility: <facility_value> 3 msgId: <message_ID> 4 payloadKey: <record_field> 5 procId: <process_ID> 6 rfc: <RFC3164_or_RFC5424> 7 severity: informational 8 tuning: deliveryMode: <AtLeastOnce_or_AtMostOnce> 9 url: <url> 10 tls: 11 ca: key: ca-bundle.crt secretName: syslog-secret type: syslog pipelines: - inputRefs: 12 - application name: syslog-east 13 outputRefs: - rsyslog-east serviceAccount: 14 name: logcollector",
"oc create -f <filename>.yaml",
"spec: outputs: - name: syslogout syslog: enrichment: KubernetesMinimal: true facility: user payloadKey: message rfc: RFC3164 severity: debug tag: mytag type: syslog url: tls://syslog-receiver.example.com:6514 pipelines: - inputRefs: - application name: test-app outputRefs: - syslogout",
"2025-03-03T11:48:01+00:00 example-worker-x syslogsyslogserverd846bb9b: {...}",
"2025-03-03T11:48:01+00:00 example-worker-x syslogsyslogserverd846bb9b: namespace_name=cakephp-project container_name=mysql pod_name=mysql-1-wr96h,message: {...}",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: drop 1 drop: 2 - test: 3 - field: .kubernetes.labels.\"foo-bar/baz\" 4 matches: .+ 5 - field: .kubernetes.pod_name notMatches: \"my-pod\" 6 pipelines: - name: <pipeline_name> 7 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .message notMatches: \"(?i)critical|error\" - field: .level matches: \"info|warning\"",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .kubernetes.namespace_name matches: \"^open\" - test: - field: .log_type matches: \"application\" - field: .kubernetes.pod_name notMatches: \"my-pod\"",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> pipelines: - name: my-pipeline inputRefs: audit 1 filterRefs: my-policy 2 filters: - name: my-policy type: kubeAPIAudit kubeAPIAudit: # Don't generate audit events for all requests in RequestReceived stage. omitStages: - \"RequestReceived\" rules: # Log pod changes at RequestResponse level - level: RequestResponse resources: - group: \"\" resources: [\"pods\"] # Log \"pods/log\", \"pods/status\" at Metadata level - level: Metadata resources: - group: \"\" resources: [\"pods/log\", \"pods/status\"] # Don't log requests to a configmap called \"controller-leader\" - level: None resources: - group: \"\" resources: [\"configmaps\"] resourceNames: [\"controller-leader\"] # Don't log watch requests by the \"system:kube-proxy\" on endpoints or services - level: None users: [\"system:kube-proxy\"] verbs: [\"watch\"] resources: - group: \"\" # core API group resources: [\"endpoints\", \"services\"] # Don't log authenticated requests to certain non-resource URL paths. - level: None userGroups: [\"system:authenticated\"] nonResourceURLs: - \"/api*\" # Wildcard matching. - \"/version\" # Log the request body of configmap changes in kube-system. - level: Request resources: - group: \"\" # core API group resources: [\"configmaps\"] # This rule only applies to resources in the \"kube-system\" namespace. # The empty string \"\" can be used to select non-namespaced resources. namespaces: [\"kube-system\"] # Log configmap and secret changes in all other namespaces at the Metadata level. - level: Metadata resources: - group: \"\" # core API group resources: [\"secrets\", \"configmaps\"] # Log all other resources in core and extensions at the Request level. - level: Request resources: - group: \"\" # core API group - group: \"extensions\" # Version of group should NOT be included. # A catch-all rule to log all other requests at the Metadata level. - level: Metadata",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: selector: matchExpressions: - key: env 1 operator: In 2 values: [\"prod\", \"qa\"] 3 - key: zone operator: NotIn values: [\"east\", \"west\"] matchLabels: 4 app: one name: app1 type: application",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: prune 1 prune: 2 in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 notIn: [.kubernetes,.log_type,.message,.\"@timestamp\"] 4 pipelines: - name: <pipeline_name> 5 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs1 type: infrastructure infrastructure: sources: 1 - node - name: mylogs2 type: audit audit: sources: 2 - kubeAPI - openshiftAPI - ovn",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: includes: - namespace: \"my-project\" 1 container: \"my-container\" 2 excludes: - container: \"other-container*\" 3 namespace: \"other-namespace\" 4 type: application",
"oc apply -f <filename>.yaml",
"oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username>",
"oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username>",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: loki-operator-alerts namespace: openshift-operators-redhat 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"infrastructure\" 3 groups: - name: LokiOperatorHighReconciliationError rules: - alert: HighPercentageError expr: | 4 sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"} |= \"error\" [1m])) by (job) / sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"}[1m])) by (job) > 0.01 for: 10s labels: severity: critical 5 annotations: summary: High Loki Operator Reconciliation Errors 6 description: High Loki Operator Reconciliation Errors 7",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: app-user-workload namespace: app-ns 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"application\" groups: - name: AppUserWorkloadHighError rules: - alert: expr: | 3 sum(rate({kubernetes_namespace_name=\"app-ns\", kubernetes_pod_name=~\"podName.*\"} |= \"error\" [1m])) by (job) for: 10s labels: severity: critical 4 annotations: summary: 5 description: 6",
"oc apply -f <filename>.yaml",
"oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{\"spec\": {\"hashRing\":{\"memberlist\":{\"instanceAddrType\":\"podIP\"},\"type\":\"memberlist\"}}}'",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: hashRing: type: memberlist memberlist: instanceAddrType: podIP",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~\"test.+\"}' 3 - days: 1 priority: 1 selector: '{log_type=\"infrastructure\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~\"test.+\"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~\"openshift-cluster.+\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc apply -f <filename>.yaml",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: \"\" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" querier: nodeSelector: node-role.kubernetes.io/infra: \"\" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" ruler: nodeSelector: node-role.kubernetes.io/infra: \"\"",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc explain lokistack.spec.template",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec.",
"oc explain lokistack.spec.template.compactor",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it.",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-6.0\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: CLIENTID value: <your_client_id> - name: TENANTID value: <your_tenant_id> - name: SUBSCRIPTIONID value: <your_subscription_id> - name: REGION value: <your_region>",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-6.0\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: ROLEARN value: <role_ARN>",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: ingester: podAntiAffinity: # requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4",
"oc get pods --field-selector status.phase==Pending -n openshift-logging",
"NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m",
"oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == \"Pending\") | .metadata.name' -r",
"storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1",
"oc delete pvc <pvc_name> -n openshift-logging",
"oc delete pod <pod_name> -n openshift-logging",
"oc patch pvc <pvc_name> -p '{\"metadata\":{\"finalizers\":null}}' -n openshift-logging",
"\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}",
"429 Too Many Requests Ingestion rate limit exceeded",
"2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true",
"level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2",
"spec: storage: schemas: - version: v13 effectiveDate: 2024-10-25",
"spec: limits: global: otlp: {} 1 tenants: application: 2 otlp: {}",
"spec: limits: global: otlp: streamLabels: resourceAttributes: - name: \"k8s.namespace.name\" - name: \"k8s.pod.name\" - name: \"k8s.container.name\"",
"spec: limits: global: otlp: streamLabels: drop: resourceAttributes: - name: \"process.command_line\" - name: \"k8s\\\\.pod\\\\.labels\\\\..+\" regex: true scopeAttributes: - name: \"service.name\" logAttributes: - name: \"http.route\"",
"spec: tenants: mode: openshift-logging openshift: otlp: disableRecommendedAttributes: true 1",
"Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.",
"oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')",
"tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2024-10-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc create sa collector -n openshift-logging",
"oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector",
"oc project openshift-logging",
"oc adm policy add-cluster-role-to-user collect-application-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-audit-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector",
"apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging spec: serviceAccount: name: collector outputs: - name: default-lokistack type: lokiStack lokiStack: authentication: token: from: serviceAccount target: name: logging-loki namespace: openshift-logging tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: default-logstore inputRefs: - application - infrastructure outputRefs: - default-lokistack",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2024-10-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc create sa collector -n openshift-logging",
"oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector",
"oc project openshift-logging",
"oc adm policy add-cluster-role-to-user collect-application-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-audit-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector",
"apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging annotations: observability.openshift.io/tech-preview-otlp-output: \"enabled\" 1 spec: serviceAccount: name: collector outputs: - name: loki-otlp type: lokiStack 2 lokiStack: target: name: logging-loki namespace: openshift-logging dataModel: Otel 3 authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: my-pipeline inputRefs: - application - infrastructure outputRefs: - loki-otlp",
"apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: upgradeStrategy: Default",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: channel: stable-6.<y> 2 installPlanApproval: Automatic 3 name: loki-operator source: redhat-operators 4 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 1 namespace: openshift-logging stringData: 2 access_key_id: <access_key_id> access_key_secret: <access_secret> bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1",
"oc apply -f <filename>.yaml",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: \"<yyyy>-<mm>-<dd>\" 4 secret: name: logging-loki-s3 5 type: s3 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8",
"oc apply -f <filename>.yaml",
"oc get pods -n openshift-logging",
"oc get pods -n openshift-logging NAME READY STATUS RESTARTS AGE logging-loki-compactor-0 1/1 Running 0 42m logging-loki-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m logging-loki-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m logging-loki-gateway-5f6c75f879-xhq98 2/2 Running 0 42m logging-loki-index-gateway-0 1/1 Running 0 42m logging-loki-ingester-0 1/1 Running 0 42m logging-loki-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m logging-loki-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: upgradeStrategy: Default",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable-6.<y> 2 installPlanApproval: Automatic 3 name: cluster-logging source: redhat-operators 4 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"oc create sa logging-collector -n openshift-logging",
"oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z logging-collector -n openshift-logging oc adm policy add-cluster-role-to-user collect-application-logs -z logging-collector -n openshift-logging oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logging-collector -n openshift-logging",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging 1 spec: serviceAccount: name: logging-collector 2 outputs: - name: lokistack-out type: lokiStack 3 lokiStack: target: 4 name: logging-loki namespace: openshift-logging authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: infra-app-logs inputRefs: 5 - application - infrastructure outputRefs: - lokistack-out",
"oc apply -f <filename>.yaml",
"oc get pods -n openshift-logging",
"oc get pods -n openshift-logging NAME READY STATUS RESTARTS AGE cluster-logging-operator-fb7f7cf69-8jsbq 1/1 Running 0 98m instance-222js 2/2 Running 0 18m instance-g9ddv 2/2 Running 0 18m instance-hfqq8 2/2 Running 0 18m instance-sphwg 2/2 Running 0 18m instance-vv7zn 2/2 Running 0 18m instance-wk5zz 2/2 Running 0 18m logging-loki-compactor-0 1/1 Running 0 42m logging-loki-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m logging-loki-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m logging-loki-gateway-5f6c75f879-xhq98 2/2 Running 0 42m logging-loki-index-gateway-0 1/1 Running 0 42m logging-loki-ingester-0 1/1 Running 0 42m logging-loki-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m logging-loki-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m",
"apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 labels: openshift.io/cluster-monitoring: \"true\" 2",
"apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 1 namespace: openshift-logging 2 stringData: 3 access_key_id: <access_key_id> access_key_secret: <access_key> bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: \"<yyyy>-<mm>-<dd>\" secret: name: logging-loki-s3 4 type: s3 5 storageClassName: <storage_class_name> 6 tenants: mode: openshift-logging 7",
"apiVersion: v1 kind: ServiceAccount metadata: name: logging-collector 1 namespace: openshift-logging 2",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: logging-collector:write-logs roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: logging-collector-logs-writer 1 subjects: - kind: ServiceAccount name: logging-collector namespace: openshift-logging --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: logging-collector:collect-application roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: collect-application-logs 2 subjects: - kind: ServiceAccount name: logging-collector namespace: openshift-logging --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: logging-collector:collect-infrastructure roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: collect-infrastructure-logs 3 subjects: - kind: ServiceAccount name: logging-collector namespace: openshift-logging",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging 1 spec: serviceAccount: name: logging-collector 2 outputs: - name: lokistack-out type: lokiStack 3 lokiStack: target: 4 name: logging-loki namespace: openshift-logging authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: infra-app-logs inputRefs: 5 - application - infrastructure outputRefs: - lokistack-out",
"oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: manager-rolebinding roleRef: 1 apiGroup: rbac.authorization.k8s.io 2 kind: ClusterRole 3 name: cluster-logging-operator 4 subjects: 5 - kind: ServiceAccount 6 name: cluster-logging-operator 7 namespace: openshift-logging 8",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-application-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - application 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-audit-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - audit 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-infrastructure-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - infrastructure 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: clusterlogforwarder-editor-role rules: 1 - apiGroups: 2 - observability.openshift.io 3 resources: 4 - clusterlogforwarders 5 verbs: 6 - create 7 - delete 8 - get 9 - list 10 - patch 11 - update 12 - watch 13",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector annotations: observability.openshift.io/log-level: debug",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: annotations: observability.openshift.io/tech-preview-otlp-output: \"enabled\" 1 name: clf-otlp spec: serviceAccount: name: <service_account_name> outputs: - name: otlp type: otlp otlp: tuning: compression: gzip deliveryMode: AtLeastOnce maxRetryDuration: 20 maxWrite: 10M minRetryDuration: 5 url: <otlp_url> 2 pipelines: - inputRefs: - application - infrastructure - audit name: otlp-logs outputRefs: - otlp",
"java.lang.NullPointerException: Cannot invoke \"String.toString()\" because \"<param1>\" is null at testjava.Main.handle(Main.java:47) at testjava.Main.printMe(Main.java:19) at testjava.Main.main(Main.java:10)",
"apiVersion: \"observability.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> filters: - name: <name> type: detectMultilineException pipelines: - inputRefs: - <input-name> name: <pipeline-name> filterRefs: - <filter-name> outputRefs: - <output-name>",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: drop 1 drop: 2 - test: 3 - field: .kubernetes.labels.\"foo-bar/baz\" 4 matches: .+ 5 - field: .kubernetes.pod_name notMatches: \"my-pod\" 6 pipelines: - name: <pipeline_name> 7 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .message notMatches: \"(?i)critical|error\" - field: .level matches: \"info|warning\"",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .kubernetes.namespace_name matches: \"^open\" - test: - field: .log_type matches: \"application\" - field: .kubernetes.pod_name notMatches: \"my-pod\"",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> pipelines: - name: my-pipeline inputRefs: audit 1 filterRefs: my-policy 2 filters: - name: my-policy type: kubeAPIAudit kubeAPIAudit: # Don't generate audit events for all requests in RequestReceived stage. omitStages: - \"RequestReceived\" rules: # Log pod changes at RequestResponse level - level: RequestResponse resources: - group: \"\" resources: [\"pods\"] # Log \"pods/log\", \"pods/status\" at Metadata level - level: Metadata resources: - group: \"\" resources: [\"pods/log\", \"pods/status\"] # Don't log requests to a configmap called \"controller-leader\" - level: None resources: - group: \"\" resources: [\"configmaps\"] resourceNames: [\"controller-leader\"] # Don't log watch requests by the \"system:kube-proxy\" on endpoints or services - level: None users: [\"system:kube-proxy\"] verbs: [\"watch\"] resources: - group: \"\" # core API group resources: [\"endpoints\", \"services\"] # Don't log authenticated requests to certain non-resource URL paths. - level: None userGroups: [\"system:authenticated\"] nonResourceURLs: - \"/api*\" # Wildcard matching. - \"/version\" # Log the request body of configmap changes in kube-system. - level: Request resources: - group: \"\" # core API group resources: [\"configmaps\"] # This rule only applies to resources in the \"kube-system\" namespace. # The empty string \"\" can be used to select non-namespaced resources. namespaces: [\"kube-system\"] # Log configmap and secret changes in all other namespaces at the Metadata level. - level: Metadata resources: - group: \"\" # core API group resources: [\"secrets\", \"configmaps\"] # Log all other resources in core and extensions at the Request level. - level: Request resources: - group: \"\" # core API group - group: \"extensions\" # Version of group should NOT be included. # A catch-all rule to log all other requests at the Metadata level. - level: Metadata",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: selector: matchExpressions: - key: env 1 operator: In 2 values: [\"prod\", \"qa\"] 3 - key: zone operator: NotIn values: [\"east\", \"west\"] matchLabels: 4 app: one name: app1 type: application",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: prune 1 prune: 2 in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 notIn: [.kubernetes,.log_type,.message,.\"@timestamp\"] 4 pipelines: - name: <pipeline_name> 5 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs1 type: infrastructure infrastructure: sources: 1 - node - name: mylogs2 type: audit audit: sources: 2 - kubeAPI - openshiftAPI - ovn",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: includes: - namespace: \"my-project\" 1 container: \"my-container\" 2 excludes: - container: \"other-container*\" 3 namespace: \"other-namespace\" 4 type: application",
"oc apply -f <filename>.yaml",
"oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username>",
"oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username>",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: loki-operator-alerts namespace: openshift-operators-redhat 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"infrastructure\" 3 groups: - name: LokiOperatorHighReconciliationError rules: - alert: HighPercentageError expr: | 4 sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"} |= \"error\" [1m])) by (job) / sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"}[1m])) by (job) > 0.01 for: 10s labels: severity: critical 5 annotations: summary: High Loki Operator Reconciliation Errors 6 description: High Loki Operator Reconciliation Errors 7",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: app-user-workload namespace: app-ns 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"application\" groups: - name: AppUserWorkloadHighError rules: - alert: expr: | 3 sum(rate({kubernetes_namespace_name=\"app-ns\", kubernetes_pod_name=~\"podName.*\"} |= \"error\" [1m])) by (job) for: 10s labels: severity: critical 4 annotations: summary: 5 description: 6",
"oc apply -f <filename>.yaml",
"oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{\"spec\": {\"hashRing\":{\"memberlist\":{\"instanceAddrType\":\"podIP\"},\"type\":\"memberlist\"}}}'",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: hashRing: type: memberlist memberlist: instanceAddrType: podIP",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~\"test.+\"}' 3 - days: 1 priority: 1 selector: '{log_type=\"infrastructure\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~\"test.+\"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~\"openshift-cluster.+\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc apply -f <filename>.yaml",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: \"\" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" querier: nodeSelector: node-role.kubernetes.io/infra: \"\" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" ruler: nodeSelector: node-role.kubernetes.io/infra: \"\"",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc explain lokistack.spec.template",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec.",
"oc explain lokistack.spec.template.compactor",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it.",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-6.0\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: CLIENTID value: <your_client_id> - name: TENANTID value: <your_tenant_id> - name: SUBSCRIPTIONID value: <your_subscription_id> - name: REGION value: <your_region>",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-6.0\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: ROLEARN value: <role_ARN>",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: ingester: podAntiAffinity: # requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4",
"oc get pods --field-selector status.phase==Pending -n openshift-logging",
"NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m",
"oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == \"Pending\") | .metadata.name' -r",
"storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1",
"oc delete pvc <pvc_name> -n openshift-logging",
"oc delete pod <pod_name> -n openshift-logging",
"oc patch pvc <pvc_name> -p '{\"metadata\":{\"finalizers\":null}}' -n openshift-logging",
"\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}",
"429 Too Many Requests Ingestion rate limit exceeded",
"2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true",
"level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2",
"spec: storage: schemas: - version: v13 effectiveDate: 2024-10-25",
"spec: limits: global: otlp: {} 1 tenants: application: otlp: {} 2",
"spec: limits: global: otlp: streamLabels: resourceAttributes: - name: \"k8s.namespace.name\" - name: \"k8s.pod.name\" - name: \"k8s.container.name\"",
"spec: limits: global: otlp: streamLabels: structuredMetadata: resourceAttributes: - name: \"process.command_line\" - name: \"k8s\\\\.pod\\\\.labels\\\\..+\" regex: true scopeAttributes: - name: \"service.name\" logAttributes: - name: \"http.route\"",
"spec: tenants: mode: openshift-logging openshift: otlp: disableRecommendedAttributes: true 1",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <name> spec: outputs: - name: <output_name> type: <output_type> <output_type>: tuning: deliveryMode: AtMostOnce",
"oc create secret generic logging-loki-s3 --from-literal=bucketnames=\"<bucket_name>\" --from-literal=endpoint=\"<aws_bucket_endpoint>\" --from-literal=access_key_id=\"<aws_access_key_id>\" --from-literal=access_key_secret=\"<aws_access_key_secret>\" --from-literal=region=\"<aws_region_of_your_bucket>\" -n openshift-logging",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2022-06-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc create sa collector -n openshift-logging",
"oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector -n openshift-logging",
"apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki",
"oc adm policy add-cluster-role-to-user collect-application-logs -z collector -n openshift-logging oc adm policy add-cluster-role-to-user collect-audit-logs -z collector -n openshift-logging oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector -n openshift-logging",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging spec: serviceAccount: name: collector outputs: - name: default-lokistack type: lokiStack lokiStack: target: name: logging-loki namespace: openshift-logging authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: default-logstore inputRefs: - application - infrastructure outputRefs: - default-lokistack",
"apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: upgradeStrategy: Default",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: channel: stable-6.<y> 2 installPlanApproval: Automatic 3 name: loki-operator source: redhat-operators 4 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 1 namespace: openshift-logging stringData: 2 access_key_id: <access_key_id> access_key_secret: <access_secret> bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1",
"oc apply -f <filename>.yaml",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: \"<yyyy>-<mm>-<dd>\" 4 secret: name: logging-loki-s3 5 type: s3 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8",
"oc apply -f <filename>.yaml",
"oc get pods -n openshift-logging",
"oc get pods -n openshift-logging NAME READY STATUS RESTARTS AGE logging-loki-compactor-0 1/1 Running 0 42m logging-loki-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m logging-loki-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m logging-loki-gateway-5f6c75f879-xhq98 2/2 Running 0 42m logging-loki-index-gateway-0 1/1 Running 0 42m logging-loki-ingester-0 1/1 Running 0 42m logging-loki-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m logging-loki-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: upgradeStrategy: Default",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable-6.<y> 2 installPlanApproval: Automatic 3 name: cluster-logging source: redhat-operators 4 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"oc create sa logging-collector -n openshift-logging",
"oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z logging-collector -n openshift-logging oc adm policy add-cluster-role-to-user collect-application-logs -z logging-collector -n openshift-logging oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logging-collector -n openshift-logging",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging 1 spec: serviceAccount: name: logging-collector 2 outputs: - name: lokistack-out type: lokiStack 3 lokiStack: target: 4 name: logging-loki namespace: openshift-logging authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: infra-app-logs inputRefs: 5 - application - infrastructure outputRefs: - lokistack-out",
"oc apply -f <filename>.yaml",
"oc get pods -n openshift-logging",
"oc get pods -n openshift-logging NAME READY STATUS RESTARTS AGE cluster-logging-operator-fb7f7cf69-8jsbq 1/1 Running 0 98m instance-222js 2/2 Running 0 18m instance-g9ddv 2/2 Running 0 18m instance-hfqq8 2/2 Running 0 18m instance-sphwg 2/2 Running 0 18m instance-vv7zn 2/2 Running 0 18m instance-wk5zz 2/2 Running 0 18m logging-loki-compactor-0 1/1 Running 0 42m logging-loki-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m logging-loki-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m logging-loki-gateway-5f6c75f879-xhq98 2/2 Running 0 42m logging-loki-index-gateway-0 1/1 Running 0 42m logging-loki-ingester-0 1/1 Running 0 42m logging-loki-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m logging-loki-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m",
"apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 labels: openshift.io/cluster-monitoring: \"true\" 2",
"apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 1 namespace: openshift-logging 2 stringData: 3 access_key_id: <access_key_id> access_key_secret: <access_key> bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: \"<yyyy>-<mm>-<dd>\" secret: name: logging-loki-s3 4 type: s3 5 storageClassName: <storage_class_name> 6 tenants: mode: openshift-logging 7",
"apiVersion: v1 kind: ServiceAccount metadata: name: logging-collector 1 namespace: openshift-logging 2",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: logging-collector:write-logs roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: logging-collector-logs-writer 1 subjects: - kind: ServiceAccount name: logging-collector namespace: openshift-logging --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: logging-collector:collect-application roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: collect-application-logs 2 subjects: - kind: ServiceAccount name: logging-collector namespace: openshift-logging --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: logging-collector:collect-infrastructure roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: collect-infrastructure-logs 3 subjects: - kind: ServiceAccount name: logging-collector namespace: openshift-logging",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging 1 spec: serviceAccount: name: logging-collector 2 outputs: - name: lokistack-out type: lokiStack 3 lokiStack: target: 4 name: logging-loki namespace: openshift-logging authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: infra-app-logs inputRefs: 5 - application - infrastructure outputRefs: - lokistack-out",
"oc explain clusterlogforwarders.observability.openshift.io.spec.outputs",
"oc explain lokistacks.loki.grafana.com oc explain lokistacks.loki.grafana.com.spec oc explain lokistacks.loki.grafana.com.spec.storage oc explain lokistacks.loki.grafana.com.spec.storage.schemas",
"oc explain lokistacks.loki.grafana.com.spec.size",
"oc explain lokistacks.spec.template.distributor.replicas",
"GROUP: loki.grafana.com KIND: LokiStack VERSION: v1 FIELD: replicas <integer> DESCRIPTION: Replicas defines the number of replica pods of the component.",
"oc -n openshift-logging patch clusterlogging/instance -p '{\"spec\":{\"managementState\": \"Unmanaged\"}}' --type=merge",
"oc -n openshift-logging patch elasticsearch/elasticsearch -p '{\"metadata\":{\"ownerReferences\": []}}' --type=merge",
"oc -n openshift-logging patch kibana/kibana -p '{\"metadata\":{\"ownerReferences\": []}}' --type=merge",
"oc -n openshift-logging patch clusterlogging/instance -p '{\"spec\":{\"managementState\": \"Managed\"}}' --type=merge",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" spec: managementState: \"Managed\" collection: resources: limits: {} requests: {} nodeSelector: {} tolerations: {}",
"apiVersion: \"observability.openshift.io/v1\" kind: ClusterLogForwarder spec: managementState: Managed collector: resources: limits: {} requests: {} nodeSelector: {} tolerations: {}",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: application-logs type: application application: namespaces: - foo - bar includes: - namespace: my-important container: main excludes: - container: too-verbose",
"apiVersion: \"observability.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: application-logs type: application application: includes: - namespace: foo - namespace: bar - namespace: my-important container: main excludes: - container: too-verbose",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: an-http receiver: http: port: 8443 format: kubeAPIAudit - name: a-syslog receiver: type: syslog syslog: port: 9442",
"apiVersion: \"observability.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: an-http type: receiver receiver: type: http port: 8443 http: format: kubeAPIAudit - name: a-syslog type: receiver receiver: type: syslog port: 9442",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: logStore: type: elasticsearch",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: serviceAccount: name: <service_account_name> managementState: Managed outputs: - name: audit-elasticsearch type: elasticsearch elasticsearch: url: https://elasticsearch:9200 version: 6 index: audit-write tls: ca: key: ca-bundle.crt secretName: collector certificate: key: tls.crt secretName: collector key: key: tls.key secretName: collector - name: app-elasticsearch type: elasticsearch elasticsearch: url: https://elasticsearch:9200 version: 6 index: app-write tls: ca: key: ca-bundle.crt secretName: collector certificate: key: tls.crt secretName: collector key: key: tls.key secretName: collector - name: infra-elasticsearch type: elasticsearch elasticsearch: url: https://elasticsearch:9200 version: 6 index: infra-write tls: ca: key: ca-bundle.crt secretName: collector certificate: key: tls.crt secretName: collector key: key: tls.key secretName: collector pipelines: - name: app inputRefs: - application outputRefs: - app-elasticsearch - name: audit inputRefs: - audit outputRefs: - audit-elasticsearch - name: infra inputRefs: - infrastructure outputRefs: - infra-elasticsearch",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: logStore: type: lokistack lokistack: name: lokistack-dev",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: default-lokistack type: lokiStack lokiStack: target: name: lokistack-dev namespace: openshift-logging authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - outputRefs: - default-lokistack - inputRefs: - application - infrastructure",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder spec: pipelines: - name: application-logs parse: json labels: foo: bar detectMultilineErrors: true",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: filters: - name: detectexception type: detectMultilineException - name: parse-json type: parse - name: labels type: openshiftLabels openshiftLabels: foo: bar pipelines: - name: application-logs filterRefs: - detectexception - labels - parse-json",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder status: conditions: - lastTransitionTime: \"2024-09-13T03:28:44Z\" message: 'permitted to collect log types: [application]' reason: ClusterRolesExist status: \"True\" type: observability.openshift.io/Authorized - lastTransitionTime: \"2024-09-13T12:16:45Z\" message: \"\" reason: ValidationSuccess status: \"True\" type: observability.openshift.io/Valid - lastTransitionTime: \"2024-09-13T12:16:45Z\" message: \"\" reason: ReconciliationComplete status: \"True\" type: Ready filterConditions: - lastTransitionTime: \"2024-09-13T13:02:59Z\" message: filter \"detectexception\" is valid reason: ValidationSuccess status: \"True\" type: observability.openshift.io/ValidFilter-detectexception - lastTransitionTime: \"2024-09-13T13:02:59Z\" message: filter \"parse-json\" is valid reason: ValidationSuccess status: \"True\" type: observability.openshift.io/ValidFilter-parse-json inputConditions: - lastTransitionTime: \"2024-09-13T12:23:03Z\" message: input \"application1\" is valid reason: ValidationSuccess status: \"True\" type: observability.openshift.io/ValidInput-application1 outputConditions: - lastTransitionTime: \"2024-09-13T13:02:59Z\" message: output \"default-lokistack-application1\" is valid reason: ValidationSuccess status: \"True\" type: observability.openshift.io/ValidOutput-default-lokistack-application1 pipelineConditions: - lastTransitionTime: \"2024-09-13T03:28:44Z\" message: pipeline \"default-before\" is valid reason: ValidationSuccess status: \"True\" type: observability.openshift.io/ValidPipeline-default-before",
"oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: manager-rolebinding roleRef: 1 apiGroup: rbac.authorization.k8s.io 2 kind: ClusterRole 3 name: cluster-logging-operator 4 subjects: 5 - kind: ServiceAccount 6 name: cluster-logging-operator 7 namespace: openshift-logging 8",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-application-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - application 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-audit-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - audit 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-infrastructure-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - infrastructure 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: clusterlogforwarder-editor-role rules: 1 - apiGroups: 2 - observability.openshift.io 3 resources: 4 - clusterlogforwarders 5 verbs: 6 - create 7 - delete 8 - get 9 - list 10 - patch 11 - update 12 - watch 13",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector annotations: observability.openshift.io/log-level: debug",
"java.lang.NullPointerException: Cannot invoke \"String.toString()\" because \"<param1>\" is null at testjava.Main.handle(Main.java:47) at testjava.Main.printMe(Main.java:19) at testjava.Main.main(Main.java:10)",
"apiVersion: \"observability.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> filters: - name: <name> type: detectMultilineException pipelines: - inputRefs: - <input-name> name: <pipeline-name> filterRefs: - <filter-name> outputRefs: - <output-name>",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: drop 1 drop: 2 - test: 3 - field: .kubernetes.labels.\"foo-bar/baz\" 4 matches: .+ 5 - field: .kubernetes.pod_name notMatches: \"my-pod\" 6 pipelines: - name: <pipeline_name> 7 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .message notMatches: \"(?i)critical|error\" - field: .level matches: \"info|warning\"",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .kubernetes.namespace_name matches: \"^open\" - test: - field: .log_type matches: \"application\" - field: .kubernetes.pod_name notMatches: \"my-pod\"",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> pipelines: - name: my-pipeline inputRefs: audit 1 filterRefs: my-policy 2 filters: - name: my-policy type: kubeAPIAudit kubeAPIAudit: # Don't generate audit events for all requests in RequestReceived stage. omitStages: - \"RequestReceived\" rules: # Log pod changes at RequestResponse level - level: RequestResponse resources: - group: \"\" resources: [\"pods\"] # Log \"pods/log\", \"pods/status\" at Metadata level - level: Metadata resources: - group: \"\" resources: [\"pods/log\", \"pods/status\"] # Don't log requests to a configmap called \"controller-leader\" - level: None resources: - group: \"\" resources: [\"configmaps\"] resourceNames: [\"controller-leader\"] # Don't log watch requests by the \"system:kube-proxy\" on endpoints or services - level: None users: [\"system:kube-proxy\"] verbs: [\"watch\"] resources: - group: \"\" # core API group resources: [\"endpoints\", \"services\"] # Don't log authenticated requests to certain non-resource URL paths. - level: None userGroups: [\"system:authenticated\"] nonResourceURLs: - \"/api*\" # Wildcard matching. - \"/version\" # Log the request body of configmap changes in kube-system. - level: Request resources: - group: \"\" # core API group resources: [\"configmaps\"] # This rule only applies to resources in the \"kube-system\" namespace. # The empty string \"\" can be used to select non-namespaced resources. namespaces: [\"kube-system\"] # Log configmap and secret changes in all other namespaces at the Metadata level. - level: Metadata resources: - group: \"\" # core API group resources: [\"secrets\", \"configmaps\"] # Log all other resources in core and extensions at the Request level. - level: Request resources: - group: \"\" # core API group - group: \"extensions\" # Version of group should NOT be included. # A catch-all rule to log all other requests at the Metadata level. - level: Metadata",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: selector: matchExpressions: - key: env 1 operator: In 2 values: [\"prod\", \"qa\"] 3 - key: zone operator: NotIn values: [\"east\", \"west\"] matchLabels: 4 app: one name: app1 type: application",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: prune 1 prune: 2 in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 notIn: [.kubernetes,.log_type,.message,.\"@timestamp\"] 4 pipelines: - name: <pipeline_name> 5 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs1 type: infrastructure infrastructure: sources: 1 - node - name: mylogs2 type: audit audit: sources: 2 - kubeAPI - openshiftAPI - ovn",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: includes: - namespace: \"my-project\" 1 container: \"my-container\" 2 excludes: - container: \"other-container*\" 3 namespace: \"other-namespace\" 4 type: application",
"oc apply -f <filename>.yaml",
"oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username>",
"oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username>",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: loki-operator-alerts namespace: openshift-operators-redhat 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"infrastructure\" 3 groups: - name: LokiOperatorHighReconciliationError rules: - alert: HighPercentageError expr: | 4 sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"} |= \"error\" [1m])) by (job) / sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"}[1m])) by (job) > 0.01 for: 10s labels: severity: critical 5 annotations: summary: High Loki Operator Reconciliation Errors 6 description: High Loki Operator Reconciliation Errors 7",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: app-user-workload namespace: app-ns 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"application\" groups: - name: AppUserWorkloadHighError rules: - alert: expr: | 3 sum(rate({kubernetes_namespace_name=\"app-ns\", kubernetes_pod_name=~\"podName.*\"} |= \"error\" [1m])) by (job) for: 10s labels: severity: critical 4 annotations: summary: 5 description: 6",
"oc apply -f <filename>.yaml",
"oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{\"spec\": {\"hashRing\":{\"memberlist\":{\"instanceAddrType\":\"podIP\"},\"type\":\"memberlist\"}}}'",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: hashRing: type: memberlist memberlist: instanceAddrType: podIP",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~\"test.+\"}' 3 - days: 1 priority: 1 selector: '{log_type=\"infrastructure\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~\"test.+\"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~\"openshift-cluster.+\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc apply -f <filename>.yaml",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: \"\" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" querier: nodeSelector: node-role.kubernetes.io/infra: \"\" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" ruler: nodeSelector: node-role.kubernetes.io/infra: \"\"",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc explain lokistack.spec.template",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec.",
"oc explain lokistack.spec.template.compactor",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it.",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-6.0\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: CLIENTID value: <your_client_id> - name: TENANTID value: <your_tenant_id> - name: SUBSCRIPTIONID value: <your_subscription_id> - name: REGION value: <your_region>",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-6.0\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: ROLEARN value: <role_ARN>",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: ingester: podAntiAffinity: # requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4",
"oc get pods --field-selector status.phase==Pending -n openshift-logging",
"NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m",
"oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == \"Pending\") | .metadata.name' -r",
"storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1",
"oc delete pvc <pvc_name> -n openshift-logging",
"oc delete pod <pod_name> -n openshift-logging",
"oc patch pvc <pvc_name> -p '{\"metadata\":{\"finalizers\":null}}' -n openshift-logging",
"\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}",
"429 Too Many Requests Ingestion rate limit exceeded",
"2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true",
"level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2",
"apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: channel: stable 2 name: loki-operator source: redhat-operators 3 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-logging: \"true\" openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: targetNamespaces: - openshift-logging",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable 2 name: cluster-logging source: redhat-operators 3 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: \"<yyyy>-<mm>-<dd>\" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki retentionPolicy: application: maxAge: 7d audit: maxAge: 7d infra: maxAge: 7d type: lokistack visualization: type: ocp-console ocpConsole: logsLimit: 15 managementState: Managed",
"oc apply -f <filename>.yaml",
"oc get pods -n openshift-logging",
"oc get pods -n openshift-logging NAME READY STATUS RESTARTS AGE cluster-logging-operator-fb7f7cf69-8jsbq 1/1 Running 0 98m collector-222js 2/2 Running 0 18m collector-g9ddv 2/2 Running 0 18m collector-hfqq8 2/2 Running 0 18m collector-sphwg 2/2 Running 0 18m collector-vv7zn 2/2 Running 0 18m collector-wk5zz 2/2 Running 0 18m logging-view-plugin-6f76fbb78f-n2n4n 1/1 Running 0 18m lokistack-sample-compactor-0 1/1 Running 0 42m lokistack-sample-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m lokistack-sample-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m lokistack-sample-gateway-5f6c75f879-xhq98 2/2 Running 0 42m lokistack-sample-index-gateway-0 1/1 Running 0 42m lokistack-sample-ingester-0 1/1 Running 0 42m lokistack-sample-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m lokistack-sample-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: \"<yyyy>-<mm>-<dd>\" secret: name: logging-loki-s3 4 type: s3 5 credentialMode: 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance 1 namespace: openshift-logging 2 spec: collection: type: vector logStore: lokistack: name: logging-loki retentionPolicy: application: maxAge: 7d audit: maxAge: 7d infra: maxAge: 7d type: lokistack visualization: type: ocp-console ocpConsole: logsLimit: 15 managementState: Managed"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/logging/cluster-logging-curator |
Chapter 1. Support policy for Red Hat build of OpenJDK | Chapter 1. Support policy for Red Hat build of OpenJDK Red Hat will support select major versions of Red Hat build of OpenJDK in its products. For consistency, these versions remain similar to Oracle JDK versions that are designated as long-term support (LTS). A major version of Red Hat build of OpenJDK will be supported for a minimum of six years from the time that version is first introduced. For more information, see the OpenJDK Life Cycle and Support Policy. Note RHEL 6 reached the end of life in November 2020. Because of this, Red Hat build of OpenJDK does not include RHEL 6 as a supported configuration. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/getting_started_with_eclipse_temurin/rn-openjdk-support-policy
Chapter 6. Guest Virtual Machine Installation Overview | Chapter 6. Guest Virtual Machine Installation Overview After you have installed the virtualization packages on the host system, you can create guest operating systems. This chapter describes the general processes for installing guest operating systems on virtual machines. You can create guest virtual machines using the New button in virt-manager or using the virt-install command-line interface. Both methods are covered by this chapter. Detailed installation instructions are available in the following chapters for specific versions of Red Hat Enterprise Linux and Microsoft Windows. 6.1. Guest Virtual Machine Prerequisites and Considerations Various factors should be considered before creating any guest virtual machines. Not only should the role of a virtual machine be considered before deployment, but regular, ongoing monitoring and assessment based on variable factors (load, number of clients) should also be performed. Some factors include: Performance Guest virtual machines should be deployed and configured based on their intended tasks. Some guest systems (for instance, guests running a database server) may require special performance considerations. Guests may require more assigned CPUs or memory based on their role and projected system load. Input/Output requirements and types of Input/Output Some guest virtual machines may have a particularly high I/O requirement or may require further considerations or projections based on the type of I/O (for instance, typical disk block size access, or the number of clients). Storage Some guest virtual machines may require higher priority access to storage or faster disk types, or may require exclusive access to areas of storage. The amount of storage used by guests should also be regularly monitored and taken into account when deploying and maintaining storage. Networking and network infrastructure Depending upon your environment, some guest virtual machines could require faster network links than other guests. Bandwidth and latency are often factors when deploying and maintaining guests, especially as requirements or load changes. Request requirements SCSI requests can only be issued to guest virtual machines on virtio drives if the virtio drives are backed by whole disks, and the disk device parameter is set to lun, as shown in the following example: | [
"<devices> <emulator>/usr/libexec/qemu-kvm</emulator> <disk type='block' device='lun'>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/chap-virtualization_host_configuration_and_guest_installation_guide-guest_installation |
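The lun disk entry above shows only the opening elements of the definition. A minimal sketch of what a complete whole-disk lun entry in a guest's libvirt domain XML might look like follows; the driver options, source device, and target values are illustrative assumptions, not values taken from this guide.

<devices>
  <emulator>/usr/libexec/qemu-kvm</emulator>
  <disk type='block' device='lun'>
    <!-- raw passthrough of a whole host disk so that SCSI requests reach the guest -->
    <driver name='qemu' type='raw' cache='none'/>
    <!-- hypothetical whole-disk block device on the host -->
    <source dev='/dev/sdb'/>
    <!-- present the disk to the guest on a SCSI bus -->
    <target dev='sda' bus='scsi'/>
  </disk>
</devices>

The device='lun' attribute, combined with a whole-disk backing device, is what allows the guest to issue SCSI requests to the drive, as stated in the chapter above.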
Chapter 12. Configuring logging for Kafka components | Chapter 12. Configuring logging for Kafka components Configure the logging levels of Kafka components directly in the configuration properties. You can also change the logging levels dynamically for Kafka brokers, Kafka Connect, and MirrorMaker 2. Increasing the log level detail, such as from INFO to DEBUG, can aid in troubleshooting a Kafka cluster. However, more verbose logs may also negatively impact performance and make it more difficult to diagnose issues. 12.1. Configuring Kafka logging properties Kafka components use the Log4j framework for error logging. By default, logging configuration is read from the classpath or config directory using the following properties files: log4j.properties for Kafka and ZooKeeper connect-log4j.properties for Kafka Connect and MirrorMaker 2 If they are not set explicitly, loggers inherit the log4j.rootLogger logging level configuration in each file. You can change the logging level in these files. You can also add and set logging levels for other loggers. You can change the location and name of the logging properties file using the KAFKA_LOG4J_OPTS environment variable, which is used by the start script for the component. Passing the name and location of the logging properties file used by Kafka brokers export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/my/path/to/log4j.properties"; \ ./bin/kafka-server-start.sh \ ./config/server.properties Passing the name and location of the logging properties file used by ZooKeeper export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/my/path/to/log4j.properties"; \ ./bin/zookeeper-server-start.sh -daemon \ ./config/zookeeper.properties Passing the name and location of the logging properties file used by Kafka Connect export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/my/path/to/connect-log4j.properties"; \ ./bin/connect-distributed.sh \ ./config/connect-distributed.properties Passing the name and location of the logging properties file used by MirrorMaker 2 export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/my/path/to/connect-log4j.properties"; \ ./bin/connect-mirror-maker.sh \ ./config/connect-mirror-maker.properties 12.2. Dynamically change logging levels for Kafka broker loggers Kafka broker logging is provided by broker loggers in each broker. Dynamically change the logging level for broker loggers at runtime without having to restart the broker. You can also reset broker loggers dynamically to their default logging levels. Prerequisites Streams for Apache Kafka is installed on each host, and the configuration files are available. Kafka is running. Procedure List all the broker loggers for a broker by using the kafka-configs.sh tool: ./bin/kafka-configs.sh --bootstrap-server <broker_address> --describe --entity-type broker-loggers --entity-name <broker_id> For example, for broker 0: ./bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type broker-loggers --entity-name 0 This returns the logging level for each logger: TRACE, DEBUG, INFO, WARN, ERROR, or FATAL. For example: #... kafka.controller.ControllerChannelManager=INFO sensitive=false synonyms={} kafka.log.TimeIndex=INFO sensitive=false synonyms={} Change the logging level for one or more broker loggers. Use the --alter and --add-config options and specify each logger and its level as a comma-separated list in double quotes.
./bin/kafka-configs.sh --bootstrap-server <broker_address> --alter --add-config " LOGGER-ONE=NEW-LEVEL , LOGGER-TWO=NEW-LEVEL " --entity-type broker-loggers --entity-name <broker_id> For example, for broker 0: ./bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config "kafka.controller.ControllerChannelManager=WARN,kafka.log.TimeIndex=WARN" --entity-type broker-loggers --entity-name 0 If successful, this returns: Completed updating config for broker: 0. Resetting a broker logger You can reset one or more broker loggers to their default logging levels by using the kafka-configs.sh tool. Use the --alter and --delete-config options and specify each broker logger as a comma-separated list in double quotes: ./bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --delete-config " LOGGER-ONE , LOGGER-TWO " --entity-type broker-loggers --entity-name <broker_id> Additional resources Updating Broker Configs in the Apache Kafka documentation 12.3. Dynamically change logging levels for Kafka Connect and MirrorMaker 2 Dynamically change logging levels for Kafka Connect workers or MirrorMaker 2 connectors at runtime without having to restart. Use the Kafka Connect API to change the log level temporarily for a worker or connector logger. The Kafka Connect API provides an admin/loggers endpoint to get or modify logging levels. When you change the log level using the API, the logger configuration in the connect-log4j.properties configuration file does not change. If required, you can permanently change the logging levels in the configuration file. Note You can only change the logging level of MirrorMaker 2 at runtime when in distributed or standalone mode. Dedicated MirrorMaker 2 clusters have no Kafka Connect REST API, so changing the logging level is not possible. The default listener for the Kafka Connect API is on port 8083, which is used in this procedure. You can change or add more listeners, and also enable TLS authentication, using the admin.listeners configuration. Example listener configuration for the admin endpoint admin.listeners=https://localhost:8083 admin.listeners.https.ssl.truststore.location=/path/to/truststore.jks admin.listeners.https.ssl.truststore.password=123456 admin.listeners.https.ssl.keystore.location=/path/to/keystore.jks admin.listeners.https.ssl.keystore.password=123456 If you do not want the admin endpoint to be available, you can disable it in the configuration by specifying an empty string. Example listener configuration to disable the admin endpoint admin.listeners= Prerequisites Streams for Apache Kafka is installed on each host, and the configuration files are available. Kafka is running. Kafka Connect or MirrorMaker 2 is running. Procedure Check the current logging level for the loggers configured in the connect-log4j.properties file: $ cat ./config/connect-log4j.properties # ... log4j.rootLogger=INFO, stdout, connectAppender # ... log4j.logger.org.reflections=ERROR Use a curl command to check the logging levels from the admin/loggers endpoint of the Kafka Connect API: curl -s http://localhost:8083/admin/loggers/ | jq { "org.reflections": { "level": "ERROR" }, "root": { "level": "INFO" } } jq prints the output in JSON format. The list shows the standard org and root level loggers, plus any specific loggers with modified logging levels.
If you configure TLS (Transport Layer Security) authentication for the admin.listeners configuration in Kafka Connect, then the address of the loggers endpoint is the value specified for admin.listeners with https as the protocol, such as https://localhost:8083. You can also get the log level of a specific logger: curl -s http://localhost:8083/admin/loggers/org.apache.kafka.connect.mirror.MirrorCheckpointConnector | jq { "level": "INFO" } Use a PUT method to change the log level for a logger: curl -Ss -X PUT -H 'Content-Type: application/json' -d '{"level": "TRACE"}' http://localhost:8083/admin/loggers/root { # ... "org.reflections": { "level": "TRACE" }, "org.reflections.Reflections": { "level": "TRACE" }, "root": { "level": "TRACE" } } If you change the root logger, the logging level is also changed for any loggers that used the root logging level by default. | [
"export KAFKA_LOG4J_OPTS=\"-Dlog4j.configuration=file:/my/path/to/log4j.properties\"; ./bin/kafka-server-start.sh ./config/server.properties",
"export KAFKA_LOG4J_OPTS=\"-Dlog4j.configuration=file:/my/path/to/log4j.properties\"; ./bin/zookeeper-server-start.sh -daemon ./config/zookeeper.properties",
"export KAFKA_LOG4J_OPTS=\"-Dlog4j.configuration=file:/my/path/to/connect-log4j.properties\"; ./bin/connect-distributed.sh ./config/connect-distributed.properties",
"export KAFKA_LOG4J_OPTS=\"-Dlog4j.configuration=file:/my/path/to/connect-log4j.properties\"; ./bin/connect-mirror-maker.sh ./config/connect-mirror-maker.properties",
"./bin/kafka-configs.sh --bootstrap-server <broker_address> --describe --entity-type broker-loggers --entity-name <broker_id>",
"./bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type broker-loggers --entity-name 0",
"# kafka.controller.ControllerChannelManager=INFO sensitive=false synonyms={} kafka.log.TimeIndex=INFO sensitive=false synonyms={}",
"./bin/kafka-configs.sh --bootstrap-server <broker_address> --alter --add-config \" LOGGER-ONE=NEW-LEVEL , LOGGER-TWO=NEW-LEVEL \" --entity-type broker-loggers --entity-name <broker_id>",
"./bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config \"kafka.controller.ControllerChannelManager=WARN,kafka.log.TimeIndex=WARN\" --entity-type broker-loggers --entity-name 0",
"Completed updating config for broker: 0.",
"./bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --delete-config \" LOGGER-ONE , LOGGER-TWO \" --entity-type broker-loggers --entity-name <broker_id>",
"admin.listeners=https://localhost:8083 admin.listeners.https.ssl.truststore.location=/path/to/truststore.jks admin.listeners.https.ssl.truststore.password=123456 admin.listeners.https.ssl.keystore.location=/path/to/keystore.jks admin.listeners.https.ssl.keystore.password=123456",
"admin.listeners=",
"cat ./config/connect-log4j.properties log4j.rootLogger=INFO, stdout, connectAppender log4j.logger.org.reflections=ERROR",
"curl -s http://localhost:8083/admin/loggers/ | jq { \"org.reflections\": { \"level\": \"ERROR\" }, \"root\": { \"level\": \"INFO\" } }",
"curl -s http://localhost:8083/admin/loggers/org.apache.kafka.connect.mirror.MirrorCheckpointConnector | jq { \"level\": \"INFO\" }",
"curl -Ss -X PUT -H 'Content-Type: application/json' -d '{\"level\": \"TRACE\"}' http://localhost:8083/admin/loggers/root { # \"org.reflections\": { \"level\": \"TRACE\" }, \"org.reflections.Reflections\": { \"level\": \"TRACE\" }, \"root\": { \"level\": \"TRACE\" } }"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_streams_for_apache_kafka_on_rhel_with_zookeeper/assembly-kafka-logging-str |
Integrating the Red Hat Hybrid Cloud Console with third-party applications | Integrating the Red Hat Hybrid Cloud Console with third-party applications Red Hat Hybrid Cloud Console 1-latest Configuring integrations between third-party tools and the Red Hat Hybrid Cloud Console Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/integrating_the_red_hat_hybrid_cloud_console_with_third-party_applications/index |
Chapter 6. Updating OpenShift Virtualization | Chapter 6. Updating OpenShift Virtualization Learn how Operator Lifecycle Manager (OLM) delivers z-stream and minor version updates for OpenShift Virtualization. Note The Node Maintenance Operator (NMO) is no longer shipped with OpenShift Virtualization. You can install the NMO from the OperatorHub in the OpenShift Container Platform web console, or by using the OpenShift CLI ( oc ). You must perform one of the following tasks before updating to OpenShift Virtualization 4.11 from OpenShift Virtualization 4.10.2 and later releases: Move all nodes out of maintenance mode. Install the standalone NMO and replace the nodemaintenances.nodemaintenance.kubevirt.io custom resource (CR) with a nodemaintenances.nodemaintenance.medik8s.io CR. 6.1. About updating OpenShift Virtualization Operator Lifecycle Manager (OLM) manages the lifecycle of the OpenShift Virtualization Operator. The Marketplace Operator, which is deployed during OpenShift Container Platform installation, makes external Operators available to your cluster. OLM provides z-stream and minor version updates for OpenShift Virtualization. Minor version updates become available when you update OpenShift Container Platform to the minor version. You cannot update OpenShift Virtualization to the minor version without first updating OpenShift Container Platform. OpenShift Virtualization subscriptions use a single update channel that is named stable . The stable channel ensures that your OpenShift Virtualization and OpenShift Container Platform versions are compatible. If your subscription's approval strategy is set to Automatic , the update process starts as soon as a new version of the Operator is available in the stable channel. It is highly recommended to use the Automatic approval strategy to maintain a supportable environment. Each minor version of OpenShift Virtualization is only supported if you run the corresponding OpenShift Container Platform version. For example, you must run OpenShift Virtualization 4.11 on OpenShift Container Platform 4.11. Though it is possible to select the Manual approval strategy, this is not recommended because it risks the supportability and functionality of your cluster. With the Manual approval strategy, you must manually approve every pending update. If OpenShift Container Platform and OpenShift Virtualization updates are out of sync, your cluster becomes unsupported. The amount of time an update takes to complete depends on your network connection. Most automatic updates complete within fifteen minutes. Updating OpenShift Virtualization does not interrupt network connections. Data volumes and their associated persistent volume claims are preserved during update. Important If you have virtual machines running that use hostpath provisioner storage, they cannot be live migrated and might block an OpenShift Container Platform cluster update. As a workaround, you can reconfigure the virtual machines so that they can be powered off automatically during a cluster update. Remove the evictionStrategy: LiveMigrate field and set the runStrategy field to Always . 6.2. Configuring automatic workload updates 6.2.1. About workload updates When you update OpenShift Virtualization, virtual machine workloads, including libvirt , virt-launcher , and qemu , update automatically if they support live migration. Note Each virtual machine has a virt-launcher pod that runs the virtual machine instance (VMI). 
The virt-launcher pod runs an instance of libvirt , which is used to manage the virtual machine (VM) process. You can configure how workloads are updated by editing the spec.workloadUpdateStrategy stanza of the HyperConverged custom resource (CR). There are two available workload update methods: LiveMigrate and Evict . Because the Evict method shuts down VMI pods, only the LiveMigrate update strategy is enabled by default. When LiveMigrate is the only update strategy enabled: VMIs that support live migration are migrated during the update process. The VM guest moves into a new pod with the updated components enabled. VMIs that do not support live migration are not disrupted or updated. If a VMI has the LiveMigrate eviction strategy but does not support live migration, it is not updated. If you enable both LiveMigrate and Evict : VMIs that support live migration use the LiveMigrate update strategy. VMIs that do not support live migration use the Evict update strategy. If a VMI is controlled by a VirtualMachine object that has a runStrategy value of always , a new VMI is created in a new pod with updated components. Migration attempts and timeouts When updating workloads, live migration fails if a pod is in the Pending state for the following periods: 5 minutes If the pod is pending because it is Unschedulable . 15 minutes If the pod is stuck in the pending state for any reason. When a VMI fails to migrate, the virt-controller tries to migrate it again. It repeats this process until all migratable VMIs are running on new virt-launcher pods. If a VMI is improperly configured, however, these attempts can repeat indefinitely. Note Each attempt corresponds to a migration object. Only the five most recent attempts are held in a buffer. This prevents migration objects from accumulating on the system while retaining information for debugging. 6.2.2. Configuring workload update methods You can configure workload update methods by editing the HyperConverged custom resource (CR). Prerequisites To use live migration as an update method, you must first enable live migration in the cluster. Note If a VirtualMachineInstance CR contains evictionStrategy: LiveMigrate and the virtual machine instance (VMI) does not support live migration, the VMI will not update. Procedure To open the HyperConverged CR in your default editor, run the following command: USD oc edit hco -n openshift-cnv kubevirt-hyperconverged Edit the workloadUpdateStrategy stanza of the HyperConverged CR. For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: workloadUpdateStrategy: workloadUpdateMethods: 1 - LiveMigrate 2 - Evict 3 batchEvictionSize: 10 4 batchEvictionInterval: "1m0s" 5 ... 1 The methods that can be used to perform automated workload updates. The available values are LiveMigrate and Evict . If you enable both options as shown in this example, updates use LiveMigrate for VMIs that support live migration and Evict for any VMIs that do not support live migration. To disable automatic workload updates, you can either remove the workloadUpdateStrategy stanza or set workloadUpdateMethods: [] to leave the array empty. 2 The least disruptive update method. VMIs that support live migration are updated by migrating the virtual machine (VM) guest into a new pod with the updated components enabled. If LiveMigrate is the only workload update method listed, VMIs that do not support live migration are not disrupted or updated. 
3 A disruptive method that shuts down VMI pods during upgrade. Evict is the only update method available if live migration is not enabled in the cluster. If a VMI is controlled by a VirtualMachine object that has runStrategy: always configured, a new VMI is created in a new pod with updated components. 4 The number of VMIs that can be forced to be updated at a time by using the Evict method. This does not apply to the LiveMigrate method. 5 The interval to wait before evicting the batch of workloads. This does not apply to the LiveMigrate method. Note You can configure live migration limits and timeouts by editing the spec.liveMigrationConfig stanza of the HyperConverged CR. To apply your changes, save and exit the editor. 6.3. Approving pending Operator updates 6.3.1. Manually approving a pending Operator update If an installed Operator has the approval strategy in its subscription set to Manual , when new updates are released in its current update channel, the update must be manually approved before installation can begin. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators Installed Operators . Operators that have a pending update display a status with Upgrade available . Click the name of the Operator you want to update. Click the Subscription tab. Any update requiring approval are displayed to Upgrade Status . For example, it might display 1 requires approval . Click 1 requires approval , then click Preview Install Plan . Review the resources that are listed as available for update. When satisfied, click Approve . Navigate back to the Operators Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date . 6.4. Monitoring update status 6.4.1. Monitoring OpenShift Virtualization upgrade status To monitor the status of a OpenShift Virtualization Operator upgrade, watch the cluster service version (CSV) PHASE . You can also monitor the CSV conditions in the web console or by running the command provided here. Note The PHASE and conditions values are approximations that are based on available information. Prerequisites Log in to the cluster as a user with the cluster-admin role. Install the OpenShift CLI ( oc ). Procedure Run the following command: USD oc get csv -n openshift-cnv Review the output, checking the PHASE field. For example: Example output VERSION REPLACES PHASE 4.9.0 kubevirt-hyperconverged-operator.v4.8.2 Installing 4.9.0 kubevirt-hyperconverged-operator.v4.9.0 Replacing Optional: Monitor the aggregated status of all OpenShift Virtualization component conditions by running the following command: USD oc get hco -n openshift-cnv kubevirt-hyperconverged \ -o=jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}' A successful upgrade results in the following output: Example output ReconcileComplete True Reconcile completed successfully Available True Reconcile completed successfully Progressing False Reconcile completed successfully Degraded False Reconcile completed successfully Upgradeable True Reconcile completed successfully 6.4.2. Viewing outdated OpenShift Virtualization workloads You can view a list of outdated workloads by using the CLI. Note If there are outdated virtualization pods in your cluster, the OutdatedVirtualMachineInstanceWorkloads alert fires. 
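Before running the procedure below, note that you can also get a simple count of outdated virtual machine instances by piping the same label selector through standard shell tools. A hedged sketch that reuses the kubevirt.io/outdatedLauncherImage label from the following procedure:
# Count outdated VMIs across all namespaces
oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces -o name | wc -l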
Procedure To view a list of outdated virtual machine instances (VMIs), run the following command: USD oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces Note Configure workload updates to ensure that VMIs update automatically. 6.5. Additional resources What are Operators? Operator Lifecycle Manager concepts and resources Cluster service versions (CSVs) Virtual machine live migration Configuring virtual machine eviction strategy Configuring live migration limits and timeouts | [
"oc edit hco -n openshift-cnv kubevirt-hyperconverged",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: workloadUpdateStrategy: workloadUpdateMethods: 1 - LiveMigrate 2 - Evict 3 batchEvictionSize: 10 4 batchEvictionInterval: \"1m0s\" 5",
"oc get csv -n openshift-cnv",
"VERSION REPLACES PHASE 4.9.0 kubevirt-hyperconverged-operator.v4.8.2 Installing 4.9.0 kubevirt-hyperconverged-operator.v4.9.0 Replacing",
"oc get hco -n openshift-cnv kubevirt-hyperconverged -o=jsonpath='{range .status.conditions[*]}{.type}{\"\\t\"}{.status}{\"\\t\"}{.message}{\"\\n\"}{end}'",
"ReconcileComplete True Reconcile completed successfully Available True Reconcile completed successfully Progressing False Reconcile completed successfully Degraded False Reconcile completed successfully Upgradeable True Reconcile completed successfully",
"oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/virtualization/upgrading-virt |
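A possible CLI sketch for the hostpath provisioner workaround mentioned in the Important admonition of the chapter above (remove the evictionStrategy: LiveMigrate field and set runStrategy to Always). The virtual machine name and namespace are placeholders, and the sketch assumes the VM defines spec.template.spec.evictionStrategy and uses runStrategy rather than the legacy running field:
# Remove the eviction strategy so the VM can be powered off during a cluster update
oc patch vm <vm_name> -n <namespace> --type json -p '[{"op":"remove","path":"/spec/template/spec/evictionStrategy"}]'
# Ensure the VM is restarted automatically after the update
oc patch vm <vm_name> -n <namespace> --type merge -p '{"spec":{"runStrategy":"Always"}}'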
Chapter 2. Marshalling custom objects with ProtoStream | Chapter 2. Marshalling custom objects with ProtoStream Marshalling is a process that converts Java objects into a binary format that can be transferred across the network or stored to disk. The reverse process, unmarshalling, transforms data from a binary format back into Java objects. Data Grid performs marshalling and unmarshalling to: Send data to other Data Grid nodes in a cluster. Store data in persistent cache stores. Transmit objects between clients and remote caches. Store objects in native memory outside the JVM heap. Store objects in JVM heap memory when the cache encoding is not application/x-java-object . When storing custom objects in Data Grid caches, you should use Protobuf-based marshalling with the ProtoStream marshaller. 2.1. ProtoStream marshalling Data Grid provides the ProtoStream API so you can marshall Java objects as Protocol Buffers (Protobuf). ProtoStream natively supports many different Java data types, which means you do not need to configure ProtoStream marshalling for those types. For custom or user types, you need to provide some information so that Data Grid can marshall those objects to and from your caches. SerializationContext A repository that contains Protobuf type definitions, loaded from Protobuf schemas ( .proto files), and the accompanying marshallers. SerializationContextInitializer An interface that initializes a SerializationContext . Additional resources org.infinispan.protostream.SerializationContext org.infinispan.protostream.SerializationContextInitializer 2.1.1. ProtoStream types Data Grid uses a ProtoStream library that can handle the following types for keys and values, as well as the unboxed equivalents in the case of primitive types: byte[] Byte String Integer Long Double Float Boolean Short Character java.util.Date java.time.Instant Additional type collections The ProtoStream library includes several adapter classes for common Java types, for example: java.math.BigDecimal java.math.BigInteger java.util.UUID java.util.BitSet Data Grid provides all adapter classes for some common JDK classes in the protostream-types artifact, which is included in the infinispan-core and infinispan-client-hotrod dependencies. You do not need any configuration to store adapter classes as keys or values. However, if you want to use adapter classes as marshallable fields in ProtoStream-annotated POJOs, you can do so in the following ways: Specify the CommonTypesSchema and CommonContainerTypesSchema classes with the dependsOn element of the ProtoSchema annotation. @ProtoSchema(dependsOn = {org.infinispan.protostream.types.java.CommonTypes, org.infinispan.protostream.types.java.CommonContainerTypes}, schemaFileName = "library.proto", schemaFilePath = "proto", schemaPackageName = "example") public interface LibraryInitalizer extends SerializationContextInitializer { } Specify the required adapter classes with the includeClasses element of the ProtoSchema annotation @ProtoSchema(includeClasses = { Author.class, Book.class, UUIDAdapter.class, java.math.BigInteger }, schemaFileName = "library.proto", schemaFilePath = "proto", schemaPackageName = "library") public interface LibraryInitalizer extends SerializationContextInitializer { } Additional resources Protocol Buffers Data Grid ProtoStream API 2.1.2. ProtoStream annotations The ProtoStream API includes annotations that you can add to Java applications to define Protobuf schemas, which provide a structured format for your objects. 
This topic provides additional details about ProtoStream annotations. You should refer to the documentation in the org.infinispan.protostream.annotations package for complete information. Proto @Proto defines a Protocol Buffers message without the requirement of having to annotate all fields with the @ProtoField annotation. Use this annotation to quickly generate messages from records or classes with public fields. Fields must be public and they will be assigned incremental numbers based on the declaration order. It is possible to override the automated defaults for a field by using the ProtoField annotation. Warning Use automatic Protobuf field numbering only for quick prototyping. For production environments you should follow the Protocol Buffers best practices in order to guarantee future/backwards compatibility with your schema. ProtoField @ProtoField defines a Protobuf message field. This annotation applies to fields as well as getter and setter methods. Unless you are using the @Proto annotation, a class must have at least one field annotated with @ProtoField before Data Grid can marshall it as Protobuf. Parameter Value Optional or required Description number Integer Required Tag numbers must be unique within the class. type Type Optional Declares the Protobuf type of the field. If you do not specify a type, it is inferred from the Java property. You can use the @ProtoField(type) element to change the Protobuf type, similarly to changing Java int to fixed32 . Any incompatible declarations for the Java property cause compiler errors. collectionImplementation Class Optional Indicates the actual collection type if the property type is an interface or abstract class. javaType Class Optional Indicates the actual Java type if the property type is an abstract class or interface. The value must be an instantiable Java type assignable to the property type. If you declare a type with the javaType parameter, then all user code must adhere to that type. The generated marshaller for the entry uses that implementation if it is unmarshalled. If the local client uses a different implementation than declared it causes ClassCastExceptions. name String Optional Specifies a name for the Protobuf schema. defaultValue String Optional Specifies the default value for fields if they are not available when reading from the cache. The value must follow the correct syntax for the Java field type. ProtoFactory @ProtoFactory marks a single constructor or static factory method for creating instances of the message class. You can use this annotation to support immutable message classes. All fields annotated with @ProtoField must be included in the parameters. Field names and parameters of the @ProtoFactory constructor or method must match the corresponding Protobuf message, however, the order is not important. If you do not add a @ProtoFactory annotated constructor to a class, that class must have a default no-argument constructor, otherwise errors occur during compilation. ProtoSchema @ProtoSchema generates an implementation of a class or interface that extends SerializationContextInitializer . If active, the ProtoStream processor generates the implementation at compile time in the same package with the Impl suffix or a name that you specify with the className parameter. The includeClasses or basePackages parameters reference classes that the ProtoStream processor should scan and include in the Protobuf schema and marshaller. 
If you do not set either of these parameters, the ProtoStream processor scans the entire source path, which can lead to unexpected results and is not recommended. You can also use the excludeClasses parameter with the basePackages parameter to exclude classes. The schemaFileName and schemaPackageName parameters register the generated Protobuf schema under this name. If you do not set these parameters, the annotated simple class name is used with the unnamed, or default, package. Schema names must end with the .proto file extension. You can also use the marshallersOnly to generate marshallers only and suppress the Protobuf schema generation. The ProtoStream process automatically generates META-INF/services service metadata files, which you can use so that Data Grid Server automatically picks up the JAR to register the Protobuf schema. The dependsOn parameter lists annotated classes that implement SerializedContextInitializer to execute first. If the class does not implement SerializedContextInitializer or is not annotated with ProtoSchema , a compile time error occurs. ProtoAdapter @ProtoAdapter is a marshalling adapter for a class or enum that you cannot annotate directly. If you use this annotation for: Classes, the annotated class must have one @ProtoFactory annotated factory method for the marshalled class and annotated accessor methods for each field. These methods can be instance or static methods and their first argument must be the marshalled class. Enums, an identically named enum value must exist in the target enum. ProtoName @ProtoName is an optional annotation that specifies the Protobuf message or enum type name. It can be used on classes, records and enums. ProtoEnumValue @ProtoEnumValue defines a Protobuf enum value. You can apply this annotation to members of a Java enum only. ProtoReserved and ProtoReservedStatements @ProtoReserved and @ProtoReservedStatements add reserved statements to generated messages or enum definitions to prevent future usage of numbers, ranges, and names. ProtoTypeId @ProtoTypeId optionally specifies a globally unique numeric type identifier for a Protobuf message or enum type. Note You should not add this annotation to classes because Data Grid uses it internally and identifiers can change without notice. ProtoUnknownFieldSet @ProtoUnknownFieldSet optionally indicates the field, or JavaBean property of type {@link org.infinispan.protostream.UnknownFieldSet} , which stores any unknown fields. Note Data Grid does not recommend using this annotation because it is no longer supported by Google and is likely to be removed in future. Other annotations Data Grid copies any other annotations on classes, fields, and methods as comments in the generated Protobuf schema. This includes indexing annotations such as @Indexed and @Basic . Additional resources org.infinispan.protostream.annotations Protocol Buffers Language Guide - Reserved Fields Protocol Buffers Language Guide - Reserved Values 2.2. Creating serialization context initializers A serialization context initializer lets you register the following with Data Grid: Protobuf schemas that describe user types. Marshallers that provide serialization and deserialization capabilities. From a high level, you should do the following to create a serialization context initializer: Add ProtoStream annotations to your Java classes. Use the ProtoStream processor that Data Grid provides to compile your SerializationContextInitializer implementation. 
Note The org.infinispan.protostream.MessageMarshaller interface is deprecated and planned for removal in a future version of ProtoStream. You should ignore any code examples or documentation that show how to use MessageMarshaller until it is completely removed. 2.2.1. Adding the ProtoStream processor Data Grid provides a ProtoStream processor artifact that processes Java annotations in your classes at compile time to generate Protobuf schemas, accompanying marshallers, and a concrete implementation of the SerializationContextInitializer interface. Procedure Add the protostream-processor to the annotation processors configuration of maven-compiler-plugin to your pom.xml . <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>...</version> <configuration> <annotationProcessorPaths> <annotationProcessorPath> <groupId>org.infinispan.protostream</groupId> <artifactId>protostream-processor</artifactId> <version>...</version> </annotationProcessorPath> </annotationProcessorPaths> </configuration> </plugin> </plugins> </build> 2.2.2. Adding ProtoStream annotations to Java classes Declare ProtoStream metadata by adding annotations to a Java class and its members. Data Grid then uses the ProtoStream processor to generate Protobuf schema and related marshallers from those annotations. Procedure Annotate the Java fields that you want to marshall with @ProtoField , either directly on the field or on the getter or setter method. Any non-annotated fields in your Java class are transient. For example, you have a Java class with 15 fields and annotate five of them. The resulting schema contains only those five fields and only those five fields are marshalled when storing a class instance in Data Grid. Use @ProtoFactory to annotate constructors for immutable objects. The annotated constructors must initialize all fields annotated with @ProtoField . Annotate members of any Java enum with @ProtoEnumValue . 
The following Author.java and Book.java examples show Java classes annotated with @ProtoField and @ProtoFactory : Author.java import org.infinispan.protostream.annotations.ProtoFactory; import org.infinispan.protostream.annotations.ProtoField; public class Author { @ProtoField(1) final String name; @ProtoField(2) final String surname; @ProtoFactory Author(String name, String surname) { this.name = name; this.surname = surname; } // public Getter methods omitted for brevity } Book.java import org.infinispan.protostream.annotations.ProtoFactory; import org.infinispan.protostream.annotations.ProtoField; public class Book { @ProtoField(number = 1) public final UUID id; @ProtoField(number = 2) final String title; @ProtoField(number = 3) final String description; @ProtoField(number = 4, defaultValue = "0") final int publicationYear; @ProtoField(number = 5, collectionImplementation = ArrayList.class) final List<Author> authors; @ProtoField(number = 6) public Language language; @ProtoFactory Book(UUID id, String title, String description, int publicationYear, List<Author> authors, Language language) { this.id = id; this.title = title; this.description = description; this.publicationYear = publicationYear; this.authors = authors; this.language = language; } // public Getter methods not included for brevity } The following Language.java example shows a Java enum annotated with @ProtoEnumValue along with the corresponding Protobuf schema: Language.java import org.infinispan.protostream.annotations.ProtoEnumValue; public enum Language { @ProtoEnumValue(number = 0, name = "EN") ENGLISH, @ProtoEnumValue(number = 1, name = "DE") GERMAN, @ProtoEnumValue(number = 2, name = "IT") ITALIAN, @ProtoEnumValue(number = 3, name = "ES") SPANISH, @ProtoEnumValue(number = 4, name = "FR") FRENCH; } Language.proto enum Language { EN = 0; DE = 1; IT = 2; ES = 3; FR = 4; } Additional resources org.infinispan.protostream.annotations.ProtoField org.infinispan.protostream.annotations.ProtoFactory 2.2.3. Creating ProtoStream adapter classes ProtoStream provides a @ProtoAdapter annotation that you can use to marshall external, third-party Java object classes that you cannot annotate directly. Procedure Create an Adapter class and add the @ProtoAdapter annotation, as in the following example: import java.util.UUID; import org.infinispan.protostream.annotations.ProtoAdapter; import org.infinispan.protostream.annotations.ProtoFactory; import org.infinispan.protostream.annotations.ProtoField; import org.infinispan.protostream.descriptors.Type; /** * Human readable UUID adapter for UUID marshalling */ @ProtoAdapter(UUID.class) public class UUIDAdapter { @ProtoFactory UUID create(String stringUUID) { return UUID.fromString(stringUUID); } @ProtoField(1) String getStringUUID(UUID uuid) { return uuid.toString(); } } Additional resources org.infinispan.protostream.annotations.ProtoAdapter 2.2.4. Generating serialization context initializers After you add the ProtoStream processor and annotate your Java classes, you can add the @ProtoSchema annotation to an interface so that Data Grid generates the Protobuf schema, accompanying marshallers, and a concrete implementation of the SerializationContextInitializer . Note By default, generated implementation names are the annotated class name with an "Impl" suffix. Procedure Define an interface that extends GeneratedSchema or its super interface, SerializationContextInitializer . 
Note The GeneratedSchema interface includes a method to access the Protobuf schema whereas the SerializationContextInitializer interface supports only registration methods. Annotate the interface with @ProtoSchema . Ensure that includeClasses parameter includes all classes for the generated SerializationContextInitializer implementation. Specify a name for the generated .proto schema with the schemaFileName parameter. Set a path under target/classes where schema files are generated with the schemaFilePath parameter. Specify a package name for the generated .proto schema with the schemaPackageName parameter. The following example shows a GeneratedSchema interface annotated with @ProtoSchema : @ProtoSchema( includeClasses = { Book.class, Author.class, UUIDAdapter.class, Language.class }, schemaFileName = "library.proto", schemaFilePath = "proto/", schemaPackageName = "book_sample") interface LibraryInitializer extends GeneratedSchema { } steps If you use embedded caches, Data Grid automatically registers your SerializationContextInitializer implementation. If you use remote caches, you must register your SerializationContextInitializer implementation with Data Grid Server. Additional resources org.infinispan.protostream.annotations.ProtoSchema 2.2.5. Protocol Buffers best practices The Protocol Buffers documentation provides a list of best practices on how to design messages and how to evolve the schema in order to maintain backwards compatibility. Data Grid can automatically perform compatibility checks when schemas are updated and reject updates when incompatibilities are detected. The types of checks can be configured via the schema-compatibility attribute of the global serialization configuration. The available levels are: UNRESTRICTED : no checks are performed LENIENT : a subset of the rules are enforced STRICT : all the rules are enforced (default) The following table shows the compatibility check rules enabled for each level Rule Description Level No Using Reserved Fields Compares the current and updated definitions and returns a list of warnings if any message's previously reserved fields or IDs are now being used as part of the same message. LENIENT , STRICT No Changing Field IDs Compares the current and updated definitions and returns a list of warnings if any field ID number has been changed. LENIENT , STRICT No Changing Field Types Compares the current and updated definitions and returns a list of warnings if any field type has been changed. LENIENT , STRICT No Removing Fields Without Reserve Compares the current and updated definitions and returns a list of warnings if any field has been removed without a corresponding reservation of that field name or ID. LENIENT , STRICT No Removing Reserved Fields Compares the current and updated definitions and returns a list of warnings if any reserved field has been removed. STRICT No Changing Field Names Compares the current and updated definitions and returns a list of warnings if any message's fields have been renamed. STRICT 2.2.6. Registering serialization context initializers For embedded caches, Data Grid automatically registers serialization contexts and marshallers in your annotated SerializationContextInitializer implementation using the java.util.ServiceLoader . If you prefer, you can disable automatic registration of SerializationContextInitializer implementations and then register them manually. Important If you manually register one SerializationContextInitializer implementation, it disables automatic registration. 
You must then manually register all other implementations. Procedure Set a value of false for the ProtoSchema.service annotation. @ProtoSchema( includeClasses = SomeClass.class, ... service = false ) Manually register SerializationContextInitializer implementations either programmatically or declaratively, as in the following examples: Declarative <serialization> <context-initializer class="org.infinispan.example.LibraryInitializerImpl"/> <context-initializer class="org.infinispan.example.another.SCIImpl"/> </serialization> Programmatic GlobalConfigurationBuilder builder = new GlobalConfigurationBuilder(); builder.serialization() .addContextInitializers(new LibraryInitializerImpl(), new SCIImpl()); 2.2.7. Registering Protobuf schemas with Data Grid Server Register Protobuf schemas with Data Grid Server to perform Ickle queries or convert from application/x-protostream to other media types such as application/json . Prerequisites Generate Protobuf schema with the ProtoStream processor. You can find generated Protobuf schema in the target/<schemaFilePath>/ directory. Have a user with CREATE permissions. Note Security authorization requires CREATE permissions to add schemas. With the default settings, you need the deployer role at minimum. Procedure Add Protobuf schema to Data Grid Server in one of the following ways: Open the Data Grid Console in any browser, select the Schema tab and then Add Protobuf schema . Use the schema command with the --upload= argument from the Data Grid command line interface (CLI). Include the Protobuf schema in the payload of a POST request with the REST API. Use the generated SerializationContextInitializer implementation with a Hot Rod client to register the Protobuf schema, as in the following example: /** * Register generated Protobuf schema with Data Grid Server. * This requires the RemoteCacheManager to be initialized. * * @param initializer The serialization context initializer for the schema. */ private void registerSchemas(SerializationContextInitializer initializer) { // Store schemas in the '___protobuf_metadata' cache to register them. // Using ProtobufMetadataManagerConstants might require the query dependency. final RemoteCache<String, String> protoMetadataCache = remoteCacheManager.getCache(ProtobufMetadataManagerConstants.PROTOBUF_METADATA_CACHE_NAME); // Add the generated schema to the cache. protoMetadataCache.put(initializer.getProtoFileName(), initializer.getProtoFile()); // Ensure the registered Protobuf schemas do not contain errors. // Throw an exception if errors exist. String errors = protoMetadataCache.get(ProtobufMetadataManagerConstants.ERRORS_KEY_SUFFIX); if (errors != null) { throw new IllegalStateException("Some Protobuf schema files contain errors: " + errors + "\nSchema :\n" + initializer.getProtoFileName()); } } Add a JAR file with the SerializationContextInitializer implementation and custom classes to the USDRHDG_HOME/server/lib directory. When you do this, Data Grid Server registers your Protobuf schema at startup. However, you must add the archive to each server installation because the schema are not saved in the ___protobuf_metadata cache or automatically distributed across the cluster. Note You must do this if you require Data Grid Server to perform any application/x-protostream to application/x-java-object conversions, in which case you must also add any JAR files for your POJOs. 
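For the REST API option listed above, a minimal curl sketch. The credentials, host, and local schema path are placeholders; the single port 11222 and DIGEST authentication reflect a default Data Grid Server configuration and may differ in your deployment:
# Upload the generated schema (path follows the target/<schemaFilePath>/ convention described earlier)
curl --digest -u <username>:<password> -X POST --data-binary @target/proto/library.proto http://<server_host>:11222/rest/v2/schemas/library.proto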
steps Register the SerializationContextInitializer with your Hot Rod clients, as in the following example: ConfigurationBuilder remoteBuilder = new ConfigurationBuilder(); remoteBuilder.addServer().host(host).port(Integer.parseInt(port)); // Add your generated SerializationContextInitializer implementation. LibraryInitalizer initializer = new LibraryInitalizerImpl(); remoteBuilder.addContextInitializer(initializer); 2.2.8. Manual serialization context initializer implementations Important Data Grid strongly recommends against manually implementing the SerializationContextInitializer or GeneratedSchema interfaces. It is possible to manually implement SerializationContextInitializer or GeneratedSchema interfaces using ProtobufTagMarshaller and RawProtobufMarshaller annotations. However, manual implementations require a lot of tedious overhead and are prone to error. Implementations that you generate with the protostream-processor artifact are a much more efficient and reliable way to configure ProtoStream marshalling. | [
"@ProtoSchema(dependsOn = {org.infinispan.protostream.types.java.CommonTypes, org.infinispan.protostream.types.java.CommonContainerTypes}, schemaFileName = \"library.proto\", schemaFilePath = \"proto\", schemaPackageName = \"example\") public interface LibraryInitalizer extends SerializationContextInitializer { }",
"@ProtoSchema(includeClasses = { Author.class, Book.class, UUIDAdapter.class, java.math.BigInteger }, schemaFileName = \"library.proto\", schemaFilePath = \"proto\", schemaPackageName = \"library\") public interface LibraryInitalizer extends SerializationContextInitializer { }",
"<build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>...</version> <configuration> <annotationProcessorPaths> <annotationProcessorPath> <groupId>org.infinispan.protostream</groupId> <artifactId>protostream-processor</artifactId> <version>...</version> </annotationProcessorPath> </annotationProcessorPaths> </configuration> </plugin> </plugins> </build>",
"import org.infinispan.protostream.annotations.ProtoFactory; import org.infinispan.protostream.annotations.ProtoField; public class Author { @ProtoField(1) final String name; @ProtoField(2) final String surname; @ProtoFactory Author(String name, String surname) { this.name = name; this.surname = surname; } // public Getter methods omitted for brevity }",
"import org.infinispan.protostream.annotations.ProtoFactory; import org.infinispan.protostream.annotations.ProtoField; public class Book { @ProtoField(number = 1) public final UUID id; @ProtoField(number = 2) final String title; @ProtoField(number = 3) final String description; @ProtoField(number = 4, defaultValue = \"0\") final int publicationYear; @ProtoField(number = 5, collectionImplementation = ArrayList.class) final List<Author> authors; @ProtoField(number = 6) public Language language; @ProtoFactory Book(UUID id, String title, String description, int publicationYear, List<Author> authors, Language language) { this.id = id; this.title = title; this.description = description; this.publicationYear = publicationYear; this.authors = authors; this.language = language; } // public Getter methods not included for brevity }",
"import org.infinispan.protostream.annotations.ProtoEnumValue; public enum Language { @ProtoEnumValue(number = 0, name = \"EN\") ENGLISH, @ProtoEnumValue(number = 1, name = \"DE\") GERMAN, @ProtoEnumValue(number = 2, name = \"IT\") ITALIAN, @ProtoEnumValue(number = 3, name = \"ES\") SPANISH, @ProtoEnumValue(number = 4, name = \"FR\") FRENCH; }",
"enum Language { EN = 0; DE = 1; IT = 2; ES = 3; FR = 4; }",
"import java.util.UUID; import org.infinispan.protostream.annotations.ProtoAdapter; import org.infinispan.protostream.annotations.ProtoFactory; import org.infinispan.protostream.annotations.ProtoField; import org.infinispan.protostream.descriptors.Type; /** * Human readable UUID adapter for UUID marshalling */ @ProtoAdapter(UUID.class) public class UUIDAdapter { @ProtoFactory UUID create(String stringUUID) { return UUID.fromString(stringUUID); } @ProtoField(1) String getStringUUID(UUID uuid) { return uuid.toString(); } }",
"@ProtoSchema( includeClasses = { Book.class, Author.class, UUIDAdapter.class, Language.class }, schemaFileName = \"library.proto\", schemaFilePath = \"proto/\", schemaPackageName = \"book_sample\") interface LibraryInitializer extends GeneratedSchema { }",
"@ProtoSchema( includeClasses = SomeClass.class, service = false )",
"<serialization> <context-initializer class=\"org.infinispan.example.LibraryInitializerImpl\"/> <context-initializer class=\"org.infinispan.example.another.SCIImpl\"/> </serialization>",
"GlobalConfigurationBuilder builder = new GlobalConfigurationBuilder(); builder.serialization() .addContextInitializers(new LibraryInitializerImpl(), new SCIImpl());",
"schema --upload=person.proto person",
"POST/rest/v2/schemas/<schema_name>",
"/** * Register generated Protobuf schema with Data Grid Server. * This requires the RemoteCacheManager to be initialized. * * @param initializer The serialization context initializer for the schema. */ private void registerSchemas(SerializationContextInitializer initializer) { // Store schemas in the '___protobuf_metadata' cache to register them. // Using ProtobufMetadataManagerConstants might require the query dependency. final RemoteCache<String, String> protoMetadataCache = remoteCacheManager.getCache(ProtobufMetadataManagerConstants.PROTOBUF_METADATA_CACHE_NAME); // Add the generated schema to the cache. protoMetadataCache.put(initializer.getProtoFileName(), initializer.getProtoFile()); // Ensure the registered Protobuf schemas do not contain errors. // Throw an exception if errors exist. String errors = protoMetadataCache.get(ProtobufMetadataManagerConstants.ERRORS_KEY_SUFFIX); if (errors != null) { throw new IllegalStateException(\"Some Protobuf schema files contain errors: \" + errors + \"\\nSchema :\\n\" + initializer.getProtoFileName()); } }",
"ConfigurationBuilder remoteBuilder = new ConfigurationBuilder(); remoteBuilder.addServer().host(host).port(Integer.parseInt(port)); // Add your generated SerializationContextInitializer implementation. LibraryInitalizer initializer = new LibraryInitalizerImpl(); remoteBuilder.addContextInitializer(initializer);"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/cache_encoding_and_marshalling/marshalling_user_types |
RHOSP director Operator for OpenShift Container Platform | RHOSP director Operator for OpenShift Container Platform Red Hat OpenStack Platform 16.2 Deploying a Red Hat OpenStack Platform overcloud in a Red Hat OpenShift Container Platform cluster OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/rhosp_director_operator_for_openshift_container_platform/index |
12.3. GFS2 Configuration | 12.3. GFS2 Configuration Configuring Samba with the Red Hat Enterprise Linux clustering requires two GFS2 file systems: One small file system for CTDB, and a second file system for the Samba share. This example shows how to create the two GFS2 file systems. Before creating the GFS2 file systems, first create an LVM logical volume for each of the file systems. For information on creating LVM logical volumes, see Logical Volume Manager Administration . This example uses the following logical volumes: /dev/csmb_vg/csmb_lv , which will hold the user data that will be exported by means of a Samba share and should be sized accordingly. This example creates a logical volume that is 100GB in size. /dev/csmb_vg/ctdb_lv , which will store the shared CTDB state information and needs to be 1GB in size. You create clustered volume groups and logical volumes on one node of the cluster only. To create a GFS2 file system on a logical volume, run the mkfs.gfs2 command. You run this command on one cluster node only. To create the file system to host the Samba share on the logical volume /dev/csmb_vg/csmb_lv , execute the following command: The meaning of the parameters is as follows: -j Specifies the number of journals to create in the filesystem. This example uses a cluster with three nodes, so we create one journal per node. -p Specifies the locking protocol. lock_dlm is the locking protocol GFS2 uses for inter-node communication. -t Specifies the lock table name and is of the format cluster_name:fs_name . In this example, the cluster name as specified in the cluster.conf file is csmb , and we use gfs2 as the name for the file system. The output of this command appears as follows: In this example, the /dev/csmb_vg/csmb_lv file system will be mounted at /mnt/gfs2 on all nodes. This mount point must match the value that you specify as the location of the share directory with the path = option in the /etc/samba/smb.conf file, as described in Section 12.5, "Samba Configuration" . To create the file system to host the CTDB state information on the logical volume /dev/csmb_vg/ctdb_lv , execute the following command: Note that this command specifies a different lock table name than the lock table in the example that created the filesystem on /dev/csmb_vg/csmb_lv . This distinguishes the lock table names for the different devices used for the file systems. The output of the mkfs.gfs2 appears as follows: In this example, the /dev/csmb_vg/ctdb_lv file system will be mounted at /mnt/ctdb on all nodes. This mount point must match the value that you specify as the location of the .ctdb.lock file with the CTDB_RECOVERY_LOCK option in the /etc/sysconfig/ctdb file, as described in Section 12.4, "CTDB Configuration" . | [
"mkfs.gfs2 -j3 -p lock_dlm -t csmb:gfs2 /dev/csmb_vg/csmb_lv",
"This will destroy any data on /dev/csmb_vg/csmb_lv. It appears to contain a gfs2 filesystem. Are you sure you want to proceed? [y/n] y Device: /dev/csmb_vg/csmb_lv Blocksize: 4096 Device Size 100.00 GB (26214400 blocks) Filesystem Size: 100.00 GB (26214398 blocks) Journals: 3 Resource Groups: 400 Locking Protocol: \"lock_dlm\" Lock Table: \"csmb:gfs2\" UUID: 94297529-ABG3-7285-4B19-182F4F2DF2D7",
"mkfs.gfs2 -j3 -p lock_dlm -t csmb:ctdb_state /dev/csmb_vg/ctdb_lv",
"This will destroy any data on /dev/csmb_vg/ctdb_lv. It appears to contain a gfs2 filesystem. Are you sure you want to proceed? [y/n] y Device: /dev/csmb_vg/ctdb_lv Blocksize: 4096 Device Size 1.00 GB (262144 blocks) Filesystem Size: 1.00 GB (262142 blocks) Journals: 3 Resource Groups: 4 Locking Protocol: \"lock_dlm\" Lock Table: \"csmb:ctdb_state\" UUID: BCDA8025-CAF3-85BB-B062-CC0AB8849A03"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-GFS2-Configuration-CA |
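Supplementing the GFS2 section above: the clustered volume group and the two logical volumes that the mkfs.gfs2 commands expect could be created along the following lines. The physical volume /dev/sdb is a placeholder, the sizes match the example, and the commands assume clustered locking (clvmd) is active; run them on one cluster node only:
pvcreate /dev/sdb
vgcreate -c y csmb_vg /dev/sdb
lvcreate -L 100G -n csmb_lv csmb_vg
lvcreate -L 1G -n ctdb_lv csmb_vg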
Chapter 1. Overview | Chapter 1. Overview Read this document to understand how to create, configure, and allocate storage to core services or hosted applications in Red Hat OpenShift Data Foundation. Chapter 2, Storage classes shows you how to create custom storage classes. Chapter 5, Block pools provides you with information on how to create, update and delete block pools. Chapter 6, Configure storage for OpenShift Container Platform services shows you how to use OpenShift Data Foundation for core OpenShift Container Platform services. Chapter 8, Backing OpenShift Container Platform applications with OpenShift Data Foundation provides information about how to configure OpenShift Container Platform applications to use OpenShift Data Foundation. Adding file and object storage to an existing external OpenShift Data Foundation cluster Chapter 10, How to use dedicated worker nodes for Red Hat OpenShift Data Foundation provides information about how to use dedicated worker nodes for Red Hat OpenShift Data Foundation. Chapter 11, Managing Persistent Volume Claims provides information about managing Persistent Volume Claim requests, and automating the fulfillment of those requests. Chapter 12, Reclaiming space on target volumes shows you how to reclaim the actual available storage space. Chapter 14, Volume Snapshots shows you how to create, restore, and delete volume snapshots. Chapter 15, Volume cloning shows you how to create volume clones. Chapter 16, Managing container storage interface (CSI) component placements provides information about setting tolerations to bring up container storage interface component on the nodes. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/managing_and_allocating_storage_resources/overview |
Chapter 1. The Ceph architecture | Chapter 1. The Ceph architecture Red Hat Ceph Storage cluster is a distributed data object store designed to provide excellent performance, reliability and scalability. Distributed object stores are the future of storage, because they accommodate unstructured data, and because clients can use modern object interfaces and legacy interfaces simultaneously. For example: APIs in many languages (C/C++, Java, Python) RESTful interfaces (S3/Swift) Block device interface Filesystem interface The power of Red Hat Ceph Storage cluster can transform your organization's IT infrastructure and your ability to manage vast amounts of data, especially for cloud computing platforms like Red Hat Enterprise Linux OSP. Red Hat Ceph Storage cluster delivers extraordinary scalability-thousands of clients accessing petabytes to exabytes of data and beyond. At the heart of every Ceph deployment is the Red Hat Ceph Storage cluster. It consists of three types of daemons: Ceph OSD Daemon: Ceph OSDs store data on behalf of Ceph clients. Additionally, Ceph OSDs utilize the CPU, memory and networking of Ceph nodes to perform data replication, erasure coding, rebalancing, recovery, monitoring and reporting functions. Ceph Monitor: A Ceph Monitor maintains a master copy of the Red Hat Ceph Storage cluster map with the current state of the Red Hat Ceph Storage cluster. Monitors require high consistency, and use Paxos to ensure agreement about the state of the Red Hat Ceph Storage cluster. Ceph Manager: The Ceph Manager maintains detailed information about placement groups, process metadata and host metadata in lieu of the Ceph Monitor- significantly improving performance at scale. The Ceph Manager handles execution of many of the read-only Ceph CLI queries, such as placement group statistics. The Ceph Manager also provides the RESTful monitoring APIs. Ceph client interfaces read data from and write data to the Red Hat Ceph Storage cluster. Clients need the following data to communicate with the Red Hat Ceph Storage cluster: The Ceph configuration file, or the cluster name (usually ceph ) and the monitor address. The pool name. The user name and the path to the secret key. Ceph clients maintain object IDs and the pool names where they store the objects. However, they do not need to maintain an object-to-OSD index or communicate with a centralized object index to look up object locations. To store and retrieve data, Ceph clients access a Ceph Monitor and retrieve the latest copy of the Red Hat Ceph Storage cluster map. Then, Ceph clients provide an object name and pool name to librados , which computes an object's placement group and the primary OSD for storing and retrieving data using the CRUSH (Controlled Replication Under Scalable Hashing) algorithm. The Ceph client connects to the primary OSD where it may perform read and write operations. There is no intermediary server, broker or bus between the client and the OSD. When an OSD stores data, it receives data from a Ceph client- whether the client is a Ceph Block Device, a Ceph Object Gateway, a Ceph Filesystem or another interface- and it stores the data as an object. Note An object ID is unique across the entire cluster, not just an OSD's storage media. Ceph OSDs store all data as objects in a flat namespace. There are no hierarchies of directories. An object has a cluster-wide unique identifier, binary data, and metadata consisting of a set of name/value pairs. Ceph clients define the semantics for the client's data format. 
For example, the Ceph block device maps a block device image to a series of objects stored across the cluster. Note Objects consisting of a unique ID, data, and name/value paired metadata can represent both structured and unstructured data, as well as legacy and leading edge data storage interfaces. | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/architecture_guide/the-ceph-architecture_arch |
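To make the client workflow described above concrete, a hedged sketch using the rados command-line tool, which talks to librados directly. The user ID, pool name, object name, and file names are placeholders, and the client is assumed to have a usable Ceph configuration file and keyring:
# Write a local file into the cluster as an object, then read it back and list the pool
rados --id <user> -p <pool_name> put <object_name> ./local-file
rados --id <user> -p <pool_name> get <object_name> ./retrieved-copy
rados --id <user> -p <pool_name> ls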
A.2. Strategies for Disk Repartitioning | A.2. Strategies for Disk Repartitioning There are several different ways that a disk can be repartitioned. This section discusses the following possible approaches: Unpartitioned free space is available An unused partition is available Free space in an actively used partition is available Note that this section discusses the aforementioned concepts only theoretically and it does not include any procedures showing how to perform disk repartitioning step-by-step. Such detailed information are beyond the scope of this document. Note Keep in mind that the following illustrations are simplified in the interest of clarity and do not reflect the exact partition layout that you encounter when actually installing Red Hat Enterprise Linux. A.2.1. Using Unpartitioned Free Space In this situation, the partitions already defined do not span the entire hard disk, leaving unallocated space that is not part of any defined partition. The following diagram shows what this might look like: Figure A.8. Disk Drive with Unpartitioned Free Space In the above example, 1 represents an undefined partition with unallocated space and 2 represents a defined partition with allocated space. An unused hard disk also falls into this category. The only difference is that all the space is not part of any defined partition. In any case, you can create the necessary partitions from the unused space. Unfortunately, this scenario, although very simple, is not very likely (unless you have just purchased a new disk just for Red Hat Enterprise Linux). Most pre-installed operating systems are configured to take up all available space on a disk drive (see Section A.2.3, "Using Free Space from an Active Partition" ). A.2.2. Using Space from an Unused Partition In this case, maybe you have one or more partitions that you do not use any longer. The following diagram illustrates such a situation. Figure A.9. Disk Drive with an Unused Partition In the above example, 1 represents an unused partition and 2 represents reallocating an unused partition for Linux. In this situation, you can use the space allocated to the unused partition. You first must delete the partition and then create the appropriate Linux partition(s) in its place. You can delete the unused partition and manually create new partitions during the installation process. A.2.3. Using Free Space from an Active Partition This is the most common situation. It is also, unfortunately, the hardest to handle. The main problem is that, even if you have enough free space, it is presently allocated to a partition that is already in use. If you purchased a computer with pre-installed software, the hard disk most likely has one massive partition holding the operating system and data. Aside from adding a new hard drive to your system, you have two choices: Destructive Repartitioning In this case, the single large partition is deleted and several smaller ones are created instead. Any data held in the original partition is destroyed. This means that making a complete backup is necessary. It is highly recommended to make two backups, use verification (if available in your backup software), and try to read data from the backup before deleting the partition. Warning If an operating system was installed on that partition, it must be reinstalled if you want to use that system as well. Be aware that some computers sold with pre-installed operating systems might not include the installation media to reinstall the original operating system. 
You should check whether this applies to your system before you destroy your original partition and its operating system installation. After creating a smaller partition for your existing operating system, you can reinstall software, restore your data, and start your Red Hat Enterprise Linux installation. Figure A.10. Disk Drive Being Destructively Repartitioned In the above example, 1 represents before and 2 represents after. Warning Any data previously present in the original partition is lost. Non-Destructive Repartitioning With non-destructive repartitioning you execute a program that makes a big partition smaller without losing any of the files stored in that partition. This method is usually reliable, but can be very time-consuming on large drives. While the process of non-destructive repartitioning is rather straightforward, there are three steps involved: Compress and back up existing data Resize the existing partition Create new partition(s) Each step is described in more detail below. A.2.3.1. Compress Existing Data As the following figure shows, the first step is to compress the data in your existing partition. The reason for doing this is to rearrange the data such that it maximizes the available free space at the "end" of the partition. Figure A.11. Disk Drive Being Compressed In the above example, 1 represents before and 2 represents after. This step is crucial. Without it, the location of the data could prevent the partition from being resized to the extent desired. Note also that, for one reason or another, some data cannot be moved. If this is the case (and it severely restricts the size of your new partitions), you might be forced to destructively repartition your disk. A.2.3.2. Resize the Existing Partition Figure A.12, "Disk Drive with Partition Resized" shows the actual resizing process. While the actual result of the resizing operation varies depending on the software used, in most cases the newly freed space is used to create an unformatted partition of the same type as the original partition. Figure A.12. Disk Drive with Partition Resized In the above example, 1 represents before and 2 represents after. It is important to understand what the resizing software you use does with the newly freed space, so that you can take the appropriate steps. In the case illustrated here, it would be best to delete the new DOS partition and create the appropriate Linux partition(s). A.2.3.3. Create new partitions As the previous step implied, it might or might not be necessary to create new partitions. However, unless your resizing software supports systems with Linux installed, it is likely that you must delete the partition that was created during the resizing process. Figure A.13. Disk Drive with Final Partition Configuration In the above example, 1 represents before and 2 represents after. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-disk-partitions-making-room
Chapter 4. Deploying and testing a model | Chapter 4. Deploying and testing a model 4.1. Preparing a model for deployment After you train a model, you can deploy it by using the OpenShift AI model serving capabilities. To prepare a model for deployment, you must complete the following tasks: Move the model from your workbench to your S3-compatible object storage. Use the connection that you created in the Storing data with connections section and upload the model from a notebook. Convert the model to the portable ONNX format. ONNX allows you to transfer models between frameworks with minimal preparation and without the need for rewriting the models. Prerequisites You created the My Storage connection and have added it to your workbench. Procedure In your JupyterLab environment, open the 2_save_model.ipynb file. Follow the instructions in the notebook to make the model accessible in storage and save it in the portable ONNX format. Verification When you have completed the notebook instructions, the models/fraud/1/model.onnx file is in your object storage and it is ready for your model server to use. Next step Deploying a model 4.2. Deploying a model Now that the model is accessible in storage and saved in the portable ONNX format, you can use an OpenShift AI model server to deploy it as an API. OpenShift AI offers two options for model serving: Single-model serving - Each model in the project is deployed on its own model server. This platform works well for large models or models that need dedicated resources. Multi-model serving - All models in the project are deployed on the same model server. This platform is suitable for sharing resources among deployed models. Multi-model serving is the only option offered in the Red Hat Developer Sandbox environment. For this tutorial, since you are deploying only one model, you can select either serving type. The steps for deploying the fraud detection model depend on the type of model serving platform you select: Deploying a model on a single-model server Deploying a model on a multi-model server 4.2.1. Deploying a model on a single-model server OpenShift AI single-model servers host only one model. You create a new model server and deploy your model to it. Prerequisites A user with admin privileges has enabled the single-model serving platform on your OpenShift cluster. Procedure In the OpenShift AI dashboard, navigate to the project details page and click the Models tab. Note Depending on how model serving has been configured on your cluster, you might see only one model serving platform option. In the Single-model serving platform tile, click Select single-model . In the form, provide the following values: For Model deployment name , type fraud . For Serving runtime , select OpenVINO Model Server . For Model framework (name - version) , select onnx-1 . For Existing connection , select My Storage . Type the path that leads to the version folder that contains your model file: models/fraud Leave the other fields with the default settings. Click Deploy . Verification Notice the loading symbol under the Status section. The symbol changes to a green checkmark when the deployment completes successfully. Next step Testing the model API 4.2.2. Deploying a model on a multi-model server OpenShift AI multi-model servers can host several models at once. You create a new model server and deploy your model to it. Prerequisites A user with admin privileges has enabled the multi-model serving platform on your OpenShift cluster. 
Procedure In the OpenShift AI dashboard, navigate to the project details page and click the Models tab. Note Depending on how model serving has been configured on your cluster, you might see only one model serving platform option. In the Multi-model serving platform tile, click Select multi-model . In the form, provide the following values: For Model server name , type a name, for example Model Server . For Serving runtime , select OpenVINO Model Server . Leave the other fields with the default settings. Click Add . In the Models and model servers list, next to the new model server, click Deploy model . In the form, provide the following values: For Model deployment name , type fraud . For Model framework (name - version) , select onnx-1 . For Existing connection , select My Storage . Type the path that leads to the version folder that contains your model file: models/fraud Leave the other fields with the default settings. Click Deploy . Verification Notice the loading symbol under the Status section. The symbol changes to a green checkmark when the deployment completes successfully. Next step Testing the model API 4.3. Testing the model API Now that you've deployed the model, you can test its API endpoints. Procedure In the OpenShift AI dashboard, navigate to the project details page and click the Models tab. Take note of the model's Inference endpoint URL. You need this information when you test the model API. If the Inference endpoint field contains an Internal endpoint details link, click the link to open a text box that shows the URL details, and then take note of the restUrl value. Return to the JupyterLab environment and try out your new endpoint. If you deployed your model with the multi-model serving platform, follow the directions in 3_rest_requests_multi_model.ipynb to try a REST API call and 4_grpc_requests_multi_model.ipynb to try a gRPC API call. If you deployed your model with the single-model serving platform, follow the directions in 5_rest_requests_single_model.ipynb to try a REST API call. Next steps (Optional) Automating workflows with data science pipelines (Optional) Running a data science pipeline generated from Python code | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/openshift_ai_tutorial_-_fraud_detection_example/deploying-and-testing-a-model 
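As a hedged illustration of what a REST call against the deployed fraud model might look like from a terminal (the URL path, input tensor name, shape, and sample values below are assumptions for a KServe v2-style endpoint such as OpenVINO Model Server exposes; the referenced notebooks contain the authoritative request format):

# Replace <restUrl> with the inference endpoint noted in the dashboard
$ curl -sk <restUrl>/v2/models/fraud/infer \
    -H "Content-Type: application/json" \
    -d '{
          "inputs": [
            {
              "name": "dense_input",
              "shape": [1, 5],
              "datatype": "FP32",
              "data": [0.31, 1.95, 1.0, 0.0, 0.0]
            }
          ]
        }'

A successful request returns a JSON body with an outputs array; how the returned score should be interpreted is defined by the notebook that trained the model.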
Chapter 6. Removed functionalities | Chapter 6. Removed functionalities 6.1. Removed che-devfile-registry In this release, the Dev Spaces-specific devfile-registry operand has been removed. For configuring the custom Getting-Started samples, the admin should leverage the dedicated Kubernetes ConfigMap . Find more details in the official documentation . Additional resources CRW-7152 | null | https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.16/html/3.16.0_release_notes_and_known_issues/removed-functionalities |
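As a rough sketch only of how an administrator might supply custom Getting-Started samples through a ConfigMap after the devfile-registry removal (the ConfigMap name, namespace, labels, and file format shown here are assumptions, not confirmed by this release note; follow the linked official documentation for the exact contract):

# Create a ConfigMap from a JSON file describing the custom samples
$ oc create configmap getting-started-samples \
    --from-file=samples.json \
    -n openshift-devspaces

# Label it so the Dev Spaces dashboard can discover it (label values are assumptions)
$ oc label configmap getting-started-samples \
    app.kubernetes.io/part-of=che.eclipse.org \
    app.kubernetes.io/component=getting-started-samples \
    -n openshift-devspaces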
Chapter 81. tsigkey | Chapter 81. tsigkey This chapter describes the commands under the tsigkey command. 81.1. tsigkey create Create new tsigkey Usage: Table 81.1. Command arguments Value Summary -h, --help Show this help message and exit --name NAME Tsigkey name --algorithm ALGORITHM Tsigkey algorithm --secret SECRET Tsigkey secret --scope SCOPE Tsigkey scope --resource-id RESOURCE_ID Tsigkey resource_id --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 81.2. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 81.3. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 81.4. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 81.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 81.2. tsigkey delete Delete tsigkey Usage: Table 81.6. Positional arguments Value Summary id Tsigkey id Table 81.7. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None 81.3. tsigkey list List tsigkeys Usage: Table 81.8. Command arguments Value Summary -h, --help Show this help message and exit --name NAME Tsigkey name --algorithm ALGORITHM Tsigkey algorithm --scope SCOPE Tsigkey scope --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 81.9. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 81.10. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 81.11. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 81.12. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 81.4. tsigkey set Set tsigkey properties Usage: Table 81.13. Positional arguments Value Summary id Tsigkey id Table 81.14. 
Command arguments Value Summary -h, --help Show this help message and exit --name NAME Tsigkey name --algorithm ALGORITHM Tsigkey algorithm --secret SECRET Tsigkey secret --scope SCOPE Tsigkey scope --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 81.15. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 81.16. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 81.17. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 81.18. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 81.5. tsigkey show Show tsigkey details Usage: Table 81.19. Positional arguments Value Summary id Tsigkey id Table 81.20. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --edit-managed Edit resources marked as managed. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 81.21. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 81.22. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 81.23. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 81.24. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack tsigkey create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --name NAME --algorithm ALGORITHM --secret SECRET --scope SCOPE --resource-id RESOURCE_ID [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID]",
"openstack tsigkey delete [-h] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID] id",
"openstack tsigkey list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--name NAME] [--algorithm ALGORITHM] [--scope SCOPE] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID]",
"openstack tsigkey set [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name NAME] [--algorithm ALGORITHM] [--secret SECRET] [--scope SCOPE] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID] id",
"openstack tsigkey show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--edit-managed] [--sudo-project-id SUDO_PROJECT_ID] id"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/tsigkey |
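A hedged usage example follows for the tsigkey commands documented above; the algorithm, secret, scope, and resource ID values are illustrative placeholders (the set of algorithms and scopes accepted by your deployment may differ), shown only to demonstrate how the arguments fit together:

# Create a TSIG key scoped to a DNS pool (values are placeholders)
$ openstack tsigkey create \
    --name example-key \
    --algorithm hmac-sha256 \
    --secret bWVnYXNlY3JldA== \
    --scope POOL \
    --resource-id 794ccc2c-d751-44fe-b57f-8894c9f5c842

# Inspect the key, then remove it by ID when it is no longer needed
$ openstack tsigkey show <tsigkey_id>
$ openstack tsigkey delete <tsigkey_id>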
Chapter 14. External DNS Operator | Chapter 14. External DNS Operator 14.1. External DNS Operator in OpenShift Container Platform The External DNS Operator deploys and manages ExternalDNS to provide the name resolution for services and routes from the external DNS provider to OpenShift Container Platform. 14.1.1. External DNS Operator The External DNS Operator implements the External DNS API from the olm.openshift.io API group. The External DNS Operator deploys the ExternalDNS using a deployment resource. The ExternalDNS deployment watches the resources such as services and routes in the cluster and updates the external DNS providers. Procedure You can deploy the ExternalDNS Operator on demand from the OperatorHub, this creates a Subscription object. Check the name of an install plan: USD oc -n external-dns-operator get sub external-dns-operator -o yaml | yq '.status.installplan.name' Example output install-zcvlr Check the status of an install plan, the status of an install plan must be Complete : USD oc -n external-dns-operator get ip <install_plan_name> -o yaml | yq .status.phase' Example output Complete Use the oc get command to view the Deployment status: USD oc get -n external-dns-operator deployment/external-dns-operator Example output NAME READY UP-TO-DATE AVAILABLE AGE external-dns-operator 1/1 1 1 23h 14.1.2. External DNS Operator logs You can view External DNS Operator logs by using the oc logs command. Procedure View the logs of the External DNS Operator: USD oc logs -n external-dns-operator deployment/external-dns-operator -c external-dns-operator 14.2. Installing External DNS Operator on cloud providers You can install External DNS Operator on cloud providers such as AWS, Azure and GCP. 14.2.1. Installing the External DNS Operator You can install the External DNS Operator using the OpenShift Container Platform OperatorHub. Procedure Click Operators OperatorHub in the OpenShift Container Platform Web Console. Click External DNS Operator . You can use the Filter by keyword text box or the filter list to search for External DNS Operator from the list of Operators. Select the external-dns-operator namespace. On the External DNS Operator page, click Install . On the Install Operator page, ensure that you selected the following options: Update the channel as stable-v1.0 . Installation mode as A specific name on the cluster . Installed namespace as external-dns-operator . If namespace external-dns-operator does not exist, it gets created during the Operator installation. Select Approval Strategy as Automatic or Manual . Approval Strategy is set to Automatic by default. Click Install . If you select Automatic updates, the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. Verification Verify that External DNS Operator shows the Status as Succeeded on the Installed Operators dashboard. 14.3. External DNS Operator configuration parameters The External DNS Operators includes the following configuration parameters: 14.3.1. External DNS Operator configuration parameters The External DNS Operator includes the following configuration parameters: Parameter Description spec Enables the type of a cloud provider. spec: provider: type: AWS 1 aws: credentials: name: aws-access-key 2 1 Defines available options such as AWS, GCP and Azure. 
2 Defines a name of the secret which contains credentials for your cloud provider. zones Enables you to specify DNS zones by their domains. If you do not specify zones, ExternalDNS discovers all the zones present in your cloud provider account. zones: - "myzoneid" 1 1 Specifies the IDs of DNS zones. domains Enables you to specify AWS zones by their domains. If you do not specify domains, ExternalDNS discovers all the zones present in your cloud provider account. domains: - filterType: Include 1 matchType: Exact 2 name: "myzonedomain1.com" 3 - filterType: Include matchType: Pattern 4 pattern: ".*\\.otherzonedomain\\.com" 5 1 Instructs ExternalDNS to include the domain specified. 2 Instructs ExtrnalDNS that the domain matching has to be exact as opposed to regular expression match. 3 Defines the exact domain name by which ExternalDNS filters. 4 Sets regex-domain-filter flag in ExternalDNS . You can limit possible domains by using a Regex filter. 5 Defines the regex pattern to be used by ExternalDNS to filter the domains of the target zones. source Enables you to specify the source for the DNS records, Service or Route . source: 1 type: Service 2 service: serviceType: 3 - LoadBalancer - ClusterIP labelFilter: 4 matchLabels: external-dns.mydomain.org/publish: "yes" hostnameAnnotation: "Allow" 5 fqdnTemplate: - "{{.Name}}.myzonedomain.com" 6 1 Defines the settings for the source of DNS records. 2 The ExternalDNS uses Service type as source for creating dns records. 3 Sets service-type-filter flag in ExternalDNS . The serviceType contains the following fields: default : LoadBalancer expected : ClusterIP NodePort LoadBalancer ExternalName 4 Ensures that the controller considers only those resources which matches with label filter. 5 The default value for hostnameAnnotation is Ignore which instructs ExternalDNS to generate DNS records using the templates specified in the field fqdnTemplates . When the value is Allow the DNS records get generated based on the value specified in the external-dns.alpha.kubernetes.io/hostname annotation. 6 External DNS Operator uses a string to generate DNS names from sources that don't define a hostname, or to add a hostname suffix when paired with the fake source. source: type: OpenShiftRoute 1 openshiftRouteOptions: routerName: default 2 labelFilter: matchLabels: external-dns.mydomain.org/publish: "yes" 1 ExternalDNS` uses type route as source for creating dns records. 2 If the source is OpenShiftRoute , then you can pass the Ingress Controller name. The ExternalDNS uses canonical name of Ingress Controller as the target for CNAME record. 14.4. Creating DNS records on AWS You can create DNS records on AWS and AWS GovCloud by using External DNS Operator. 14.4.1. Creating DNS records on an public hosted zone for AWS by using Red Hat External DNS Operator You can create DNS records on a public hosted zone for AWS by using the Red Hat External DNS Operator. You can use the same instructions to create DNS records on a hosted zone for AWS GovCloud. Procedure Check the user. The user must have access to the kube-system namespace. If you don't have the credentials, as you can fetch the credentials from the kube-system namespace to use the cloud provider client: USD oc whoami Example output system:admin Fetch the values from aws-creds secret present in kube-system namespace. 
USD export AWS_ACCESS_KEY_ID=USD(oc get secrets aws-creds -n kube-system --template={{.data.aws_access_key_id}} | base64 -d) USD export AWS_SECRET_ACCESS_KEY=USD(oc get secrets aws-creds -n kube-system --template={{.data.aws_secret_access_key}} | base64 -d) Get the routes to check the domain: USD oc get routes --all-namespaces | grep console Example output openshift-console console console-openshift-console.apps.testextdnsoperator.apacshift.support console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.testextdnsoperator.apacshift.support downloads http edge/Redirect None Get the list of dns zones to find the one which corresponds to the previously found route's domain: USD aws route53 list-hosted-zones | grep testextdnsoperator.apacshift.support Example output HOSTEDZONES terraform /hostedzone/Z02355203TNN1XXXX1J6O testextdnsoperator.apacshift.support. 5 Create ExternalDNS resource for route source: USD cat <<EOF | oc create -f - apiVersion: externaldns.olm.openshift.io/v1alpha1 kind: ExternalDNS metadata: name: sample-aws 1 spec: domains: - filterType: Include 2 matchType: Exact 3 name: testextdnsoperator.apacshift.support 4 provider: type: AWS 5 source: 6 type: OpenShiftRoute 7 openshiftRouteOptions: routerName: default 8 EOF 1 Defines the name of external DNS resource. 2 By default all hosted zones are selected as potential targets. You can include a hosted zone that you need. 3 The matching of the target zone's domain has to be exact (as opposed to regular expression match). 4 Specify the exact domain of the zone you want to update. The hostname of the routes must be subdomains of the specified domain. 5 Defines the AWS Route53 DNS provider. 6 Defines options for the source of DNS records. 7 Defines OpenShift route resource as the source for the DNS records which gets created in the previously specified DNS provider. 8 If the source is OpenShiftRoute , then you can pass the OpenShift Ingress Controller name. External DNS Operator selects the canonical hostname of that router as the target while creating CNAME record. Check the records created for OCP routes using the following command: USD aws route53 list-resource-record-sets --hosted-zone-id Z02355203TNN1XXXX1J6O --query "ResourceRecordSets[?Type == 'CNAME']" | grep console 14.5. Creating DNS records on Azure You can create DNS records on Azure using External DNS Operator. 14.5.1. Creating DNS records on an public DNS zone for Azure by using Red Hat External DNS Operator You can create DNS records on a public DNS zone for Azure by using Red Hat External DNS Operator. Procedure Check the user. The user must have access to the kube-system namespace. If you don't have the credentials, as you can fetch the credentials from the kube-system namespace to use the cloud provider client: USD oc whoami Example output system:admin Fetch the values from azure-credentials secret present in kube-system namespace. 
USD CLIENT_ID=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_client_id}} | base64 -d) USD CLIENT_SECRET=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_client_secret}} | base64 -d) USD RESOURCE_GROUP=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_resourcegroup}} | base64 -d) USD SUBSCRIPTION_ID=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_subscription_id}} | base64 -d) USD TENANT_ID=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_tenant_id}} | base64 -d) Login to azure with base64 decoded values: USD az login --service-principal -u "USD{CLIENT_ID}" -p "USD{CLIENT_SECRET}" --tenant "USD{TENANT_ID}" Get the routes to check the domain: USD oc get routes --all-namespaces | grep console Example output openshift-console console console-openshift-console.apps.test.azure.example.com console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.test.azure.example.com downloads http edge/Redirect None Get the list of dns zones to find the one which corresponds to the previously found route's domain: USD az network dns zone list --resource-group "USD{RESOURCE_GROUP}" Create ExternalDNS resource for route source: apiVersion: externaldns.olm.openshift.io/v1alpha1 kind: ExternalDNS metadata: name: sample-azure 1 spec: zones: - "/subscriptions/1234567890/resourceGroups/test-azure-xxxxx-rg/providers/Microsoft.Network/dnszones/test.azure.example.com" 2 provider: type: Azure 3 source: openshiftRouteOptions: 4 routerName: default 5 type: OpenShiftRoute 6 EOF 1 Specifies the name of External DNS CR. 2 Define the zone ID. 3 Defines the Azure DNS provider. 4 You can define options for the source of DNS records. 5 If the source is OpenShiftRoute then you can pass the OpenShift Ingress Controller name. External DNS selects the canonical hostname of that router as the target while creating CNAME record. 6 Defines OpenShift route resource as the source for the DNS records which gets created in the previously specified DNS provider. Check the records created for OCP routes using the following command: USD az network dns record-set list -g "USD{RESOURCE_GROUP}" -z test.azure.example.com | grep console Note To create records on private hosted zones on private Azure dns, you need to specify the private zone under zones which populates the provider type to azure-private-dns in the ExternalDNS container args. 14.6. Creating DNS records on GCP You can create DNS records on GCP using External DNS Operator. 14.6.1. Creating DNS records on an public managed zone for GCP by using Red Hat External DNS Operator You can create DNS records on a public managed zone for GCP by using Red Hat External DNS Operator. Procedure Check the user. The user must have access to the kube-system namespace. 
If you don't have the credentials, as you can fetch the credentials from the kube-system namespace to use the cloud provider client: USD oc whoami Example output system:admin Copy the value of service_account.json in gcp-credentials secret in a file encoded-gcloud.json by running the following command: USD oc get secret gcp-credentials -n kube-system --template='{{USDv := index .data "service_account.json"}}{{USDv}}' | base64 -d - > decoded-gcloud.json Export Google credentials: USD export GOOGLE_CREDENTIALS=decoded-gcloud.json Activate your account by using the following command: USD gcloud auth activate-service-account <client_email as per decoded-gcloud.json> --key-file=decoded-gcloud.json Set your project: USD gcloud config set project <project_id as per decoded-gcloud.json> Get the routes to check the domain: USD oc get routes --all-namespaces | grep console Example output openshift-console console console-openshift-console.apps.test.gcp.example.com console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.test.gcp.example.com downloads http edge/Redirect None Get the list of managed zones to find the zone which corresponds to the previously found route's domain: USD gcloud dns managed-zones list | grep test.gcp.example.com qe-cvs4g-private-zone test.gcp.example.com Create ExternalDNS resource for route source: apiVersion: externaldns.olm.openshift.io/v1alpha1 kind: ExternalDNS metadata: name: sample-gcp 1 spec: domains: - filterType: Include 2 matchType: Exact 3 name: test.gcp.example.com 4 provider: type: GCP 5 source: openshiftRouteOptions: 6 routerName: default 7 type: OpenShiftRoute 8 EOF 1 Specifies the name of External DNS CR. 2 By default all hosted zones are selected as potential targets. You can include a hosted zone that you need. 3 The matching of the target zone's domain has to be exact (as opposed to regular expression match). 4 Specify the exact domain of the zone you want to update. The hostname of the routes must be subdomains of the specified domain. 5 Defines Google Cloud DNS provider. 6 You can define options for the source of DNS records. 7 If the source is OpenShiftRoute then you can pass the OpenShift Ingress Controller name. External DNS selects the canonical hostname of that router as the target while creating CNAME record. 8 Defines OpenShift route resource as the source for the DNS records which gets created in the previously specified DNS provider. Check the records created for OCP routes using the following command: USD gcloud dns record-sets list --zone=qe-cvs4g-private-zone | grep console 14.7. Configuring the cluster-wide proxy on the External DNS Operator You can configure the cluster-wide proxy in the External DNS Operator. After configuring the cluster-wide proxy in the External DNS Operator, Operator Lifecycle Manager (OLM) automatically updates all the deployments of the Operators with the environment variables such as HTTP_PROXY , HTTPS_PROXY , and NO_PROXY . 14.7.1. Configuring the External DNS Operator to trust the certificate authority of the cluster-wide proxy You can configure the External DNS Operator to trust the certificate authority of the cluster-wide proxy. 
Procedure Create the config map to contain the CA bundle in the external-dns-operator namespace by running the following command: USD oc -n external-dns-operator create configmap trusted-ca To inject the trusted CA bundle into the config map, add the config.openshift.io/inject-trusted-cabundle=true label to the config map by running the following command: USD oc -n external-dns-operator label cm trusted-ca config.openshift.io/inject-trusted-cabundle=true Update the subscription of the External DNS Operator by running the following command: USD oc -n external-dns-operator patch subscription external-dns-operator --type='json' -p='[{"op": "add", "path": "/spec/config", "value":{"env":[{"name":"TRUSTED_CA_CONFIGMAP_NAME","value":"trusted-ca"}]}}]' Verification After the deployment of the External DNS Operator is completed, verify that the trusted CA environment variable is added to the external-dns-operator deployment by running the following command: USD oc -n external-dns-operator exec deploy/external-dns-operator -c external-dns-operator -- printenv TRUSTED_CA_CONFIGMAP_NAME Example output trusted-ca | [
"oc -n external-dns-operator get sub external-dns-operator -o yaml | yq '.status.installplan.name'",
"install-zcvlr",
"oc -n external-dns-operator get ip <install_plan_name> -o yaml | yq .status.phase'",
"Complete",
"oc get -n external-dns-operator deployment/external-dns-operator",
"NAME READY UP-TO-DATE AVAILABLE AGE external-dns-operator 1/1 1 1 23h",
"oc logs -n external-dns-operator deployment/external-dns-operator -c external-dns-operator",
"spec: provider: type: AWS 1 aws: credentials: name: aws-access-key 2",
"zones: - \"myzoneid\" 1",
"domains: - filterType: Include 1 matchType: Exact 2 name: \"myzonedomain1.com\" 3 - filterType: Include matchType: Pattern 4 pattern: \".*\\\\.otherzonedomain\\\\.com\" 5",
"source: 1 type: Service 2 service: serviceType: 3 - LoadBalancer - ClusterIP labelFilter: 4 matchLabels: external-dns.mydomain.org/publish: \"yes\" hostnameAnnotation: \"Allow\" 5 fqdnTemplate: - \"{{.Name}}.myzonedomain.com\" 6",
"source: type: OpenShiftRoute 1 openshiftRouteOptions: routerName: default 2 labelFilter: matchLabels: external-dns.mydomain.org/publish: \"yes\"",
"oc whoami",
"system:admin",
"export AWS_ACCESS_KEY_ID=USD(oc get secrets aws-creds -n kube-system --template={{.data.aws_access_key_id}} | base64 -d) export AWS_SECRET_ACCESS_KEY=USD(oc get secrets aws-creds -n kube-system --template={{.data.aws_secret_access_key}} | base64 -d)",
"oc get routes --all-namespaces | grep console",
"openshift-console console console-openshift-console.apps.testextdnsoperator.apacshift.support console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.testextdnsoperator.apacshift.support downloads http edge/Redirect None",
"aws route53 list-hosted-zones | grep testextdnsoperator.apacshift.support",
"HOSTEDZONES terraform /hostedzone/Z02355203TNN1XXXX1J6O testextdnsoperator.apacshift.support. 5",
"cat <<EOF | oc create -f - apiVersion: externaldns.olm.openshift.io/v1alpha1 kind: ExternalDNS metadata: name: sample-aws 1 spec: domains: - filterType: Include 2 matchType: Exact 3 name: testextdnsoperator.apacshift.support 4 provider: type: AWS 5 source: 6 type: OpenShiftRoute 7 openshiftRouteOptions: routerName: default 8 EOF",
"aws route53 list-resource-record-sets --hosted-zone-id Z02355203TNN1XXXX1J6O --query \"ResourceRecordSets[?Type == 'CNAME']\" | grep console",
"oc whoami",
"system:admin",
"CLIENT_ID=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_client_id}} | base64 -d) CLIENT_SECRET=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_client_secret}} | base64 -d) RESOURCE_GROUP=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_resourcegroup}} | base64 -d) SUBSCRIPTION_ID=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_subscription_id}} | base64 -d) TENANT_ID=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_tenant_id}} | base64 -d)",
"az login --service-principal -u \"USD{CLIENT_ID}\" -p \"USD{CLIENT_SECRET}\" --tenant \"USD{TENANT_ID}\"",
"oc get routes --all-namespaces | grep console",
"openshift-console console console-openshift-console.apps.test.azure.example.com console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.test.azure.example.com downloads http edge/Redirect None",
"az network dns zone list --resource-group \"USD{RESOURCE_GROUP}\"",
"apiVersion: externaldns.olm.openshift.io/v1alpha1 kind: ExternalDNS metadata: name: sample-azure 1 spec: zones: - \"/subscriptions/1234567890/resourceGroups/test-azure-xxxxx-rg/providers/Microsoft.Network/dnszones/test.azure.example.com\" 2 provider: type: Azure 3 source: openshiftRouteOptions: 4 routerName: default 5 type: OpenShiftRoute 6 EOF",
"az network dns record-set list -g \"USD{RESOURCE_GROUP}\" -z test.azure.example.com | grep console",
"oc whoami",
"system:admin",
"oc get secret gcp-credentials -n kube-system --template='{{USDv := index .data \"service_account.json\"}}{{USDv}}' | base64 -d - > decoded-gcloud.json",
"export GOOGLE_CREDENTIALS=decoded-gcloud.json",
"gcloud auth activate-service-account <client_email as per decoded-gcloud.json> --key-file=decoded-gcloud.json",
"gcloud config set project <project_id as per decoded-gcloud.json>",
"oc get routes --all-namespaces | grep console",
"openshift-console console console-openshift-console.apps.test.gcp.example.com console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.test.gcp.example.com downloads http edge/Redirect None",
"gcloud dns managed-zones list | grep test.gcp.example.com qe-cvs4g-private-zone test.gcp.example.com",
"apiVersion: externaldns.olm.openshift.io/v1alpha1 kind: ExternalDNS metadata: name: sample-gcp 1 spec: domains: - filterType: Include 2 matchType: Exact 3 name: test.gcp.example.com 4 provider: type: GCP 5 source: openshiftRouteOptions: 6 routerName: default 7 type: OpenShiftRoute 8 EOF",
"gcloud dns record-sets list --zone=qe-cvs4g-private-zone | grep console",
"oc -n external-dns-operator create configmap trusted-ca",
"oc -n external-dns-operator label cm trusted-ca config.openshift.io/inject-trusted-cabundle=true",
"oc -n external-dns-operator patch subscription external-dns-operator --type='json' -p='[{\"op\": \"add\", \"path\": \"/spec/config\", \"value\":{\"env\":[{\"name\":\"TRUSTED_CA_CONFIGMAP_NAME\",\"value\":\"trusted-ca\"}]}}]'",
"oc -n external-dns-operator exec deploy/external-dns-operator -c external-dns-operator -- printenv TRUSTED_CA_CONFIGMAP_NAME",
"trusted-ca"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/networking/external-dns-operator-1 |
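As a small, hedged verification sketch for the ExternalDNS examples above (the operand deployment name below assumes an external-dns-<custom_resource_name> naming pattern; adjust it to whatever deployment the Operator actually creates in your cluster):

# Confirm that the ExternalDNS custom resource was accepted
$ oc get externaldns sample-aws -o yaml

# Check the operand deployment and its logs for record synchronization messages
$ oc -n external-dns-operator get deployments
$ oc -n external-dns-operator logs deployment/external-dns-sample-aws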
Chapter 12. Working with quotas | Chapter 12. Working with quotas A resource quota , defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per project. It can limit the quantity of objects that can be created in a project by type, as well as the total amount of compute resources and storage that may be consumed by resources in that project. An object quota count places a defined quota on all standard namespaced resource types. When using a resource quota, an object is charged against the quota if it exists in server storage. These types of quotas are useful to protect against exhaustion of storage resources. This guide describes how resource quotas work and how developers can work with and view them. 12.1. Viewing a quota You can view usage statistics related to any hard limits defined in a project's quota by navigating in the web console to the project's Quota page. You can also use the CLI to view quota details. Procedure Get the list of quotas defined in the project. For example, for a project called demoproject : USD oc get quota -n demoproject Example output NAME AGE REQUEST LIMIT besteffort 4s pods: 1/2 compute-resources-time-bound 10m pods: 0/2 limits.cpu: 0/1, limits.memory: 0/1Gi core-object-counts 109s configmaps: 2/10, persistentvolumeclaims: 1/4, replicationcontrollers: 1/20, secrets: 9/10, services: 2/10 Describe the quota you are interested in, for example the core-object-counts quota: USD oc describe quota core-object-counts -n demoproject Example output Name: core-object-counts Namespace: demoproject Resource Used Hard -------- ---- ---- configmaps 3 10 persistentvolumeclaims 0 4 replicationcontrollers 3 20 secrets 9 10 services 2 10 12.2. Resources managed by quotas The following describes the set of compute resources and object types that can be managed by a quota. Note A pod is in a terminal state if status.phase in (Failed, Succeeded) is true. Table 12.1. Compute resources managed by quota Resource Name Description cpu The sum of CPU requests across all pods in a non-terminal state cannot exceed this value. cpu and requests.cpu are the same value and can be used interchangeably. memory The sum of memory requests across all pods in a non-terminal state cannot exceed this value. memory and requests.memory are the same value and can be used interchangeably. requests.cpu The sum of CPU requests across all pods in a non-terminal state cannot exceed this value. cpu and requests.cpu are the same value and can be used interchangeably. requests.memory The sum of memory requests across all pods in a non-terminal state cannot exceed this value. memory and requests.memory are the same value and can be used interchangeably. limits.cpu The sum of CPU limits across all pods in a non-terminal state cannot exceed this value. limits.memory The sum of memory limits across all pods in a non-terminal state cannot exceed this value. Table 12.2. Storage resources managed by quota Resource Name Description requests.storage The sum of storage requests across all persistent volume claims in any state cannot exceed this value. persistentvolumeclaims The total number of persistent volume claims that can exist in the project. <storage-class-name>.storageclass.storage.k8s.io/requests.storage The sum of storage requests across all persistent volume claims in any state that have a matching storage class, cannot exceed this value. 
<storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims The total number of persistent volume claims with a matching storage class that can exist in the project. ephemeral-storage The sum of local ephemeral storage requests across all pods in a non-terminal state cannot exceed this value. ephemeral-storage and requests.ephemeral-storage are the same value and can be used interchangeably. requests.ephemeral-storage The sum of ephemeral storage requests across all pods in a non-terminal state cannot exceed this value. ephemeral-storage and requests.ephemeral-storage are the same value and can be used interchangeably. limits.ephemeral-storage The sum of ephemeral storage limits across all pods in a non-terminal state cannot exceed this value. Table 12.3. Object counts managed by quota Resource Name Description pods The total number of pods in a non-terminal state that can exist in the project. replicationcontrollers The total number of ReplicationControllers that can exist in the project. resourcequotas The total number of resource quotas that can exist in the project. services The total number of services that can exist in the project. services.loadbalancers The total number of services of type LoadBalancer that can exist in the project. services.nodeports The total number of services of type NodePort that can exist in the project. secrets The total number of secrets that can exist in the project. configmaps The total number of ConfigMap objects that can exist in the project. persistentvolumeclaims The total number of persistent volume claims that can exist in the project. openshift.io/imagestreams The total number of imagestreams that can exist in the project. 12.3. Quota scopes Each quota can have an associated set of scopes . A quota only measures usage for a resource if it matches the intersection of enumerated scopes. Adding a scope to a quota restricts the set of resources to which that quota can apply. Specifying a resource outside of the allowed set results in a validation error. Scope Description BestEffort Match pods that have best effort quality of service for either cpu or memory . NotBestEffort Match pods that do not have best effort quality of service for cpu and memory . A BestEffort scope restricts a quota to limiting the following resources: pods A NotBestEffort scope restricts a quota to tracking the following resources: pods memory requests.memory limits.memory cpu requests.cpu limits.cpu 12.4. Quota enforcement After a resource quota for a project is first created, the project restricts the ability to create any new resources that may violate a quota constraint until it has calculated updated usage statistics. After a quota is created and usage statistics are updated, the project accepts the creation of new content. When you create or modify resources, your quota usage is incremented immediately upon the request to create or modify the resource. When you delete a resource, your quota use is decremented during the full recalculation of quota statistics for the project. A configurable amount of time determines how long it takes to reduce quota usage statistics to their current observed system value. If project modifications exceed a quota usage limit, the server denies the action, and an appropriate error message is returned to the user explaining the quota constraint violated, and what their currently observed usage statistics are in the system. 12.5. 
Requests versus limits When allocating compute resources, each container might specify a request and a limit value each for CPU, memory, and ephemeral storage. Quotas can restrict any of these values. If the quota has a value specified for requests.cpu or requests.memory , then it requires that every incoming container make an explicit request for those resources. If the quota has a value specified for limits.cpu or limits.memory , then it requires that every incoming container specify an explicit limit for those resources. | [
"oc get quota -n demoproject",
"NAME AGE REQUEST LIMIT besteffort 4s pods: 1/2 compute-resources-time-bound 10m pods: 0/2 limits.cpu: 0/1, limits.memory: 0/1Gi core-object-counts 109s configmaps: 2/10, persistentvolumeclaims: 1/4, replicationcontrollers: 1/20, secrets: 9/10, services: 2/10",
"oc describe quota core-object-counts -n demoproject",
"Name: core-object-counts Namespace: demoproject Resource Used Hard -------- ---- ---- configmaps 3 10 persistentvolumeclaims 0 4 replicationcontrollers 3 20 secrets 9 10 services 2 10"
] | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/building_applications/working-with-quotas |
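Although creating quotas is typically an administrator task rather than a developer one, a minimal sketch of a ResourceQuota definition helps illustrate the scopes and hard limits described above; the values mirror the besteffort quota shown in the earlier example output and the pod limit is an assumption suitable only for a demo project:

$ cat <<EOF | oc create -n demoproject -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: besteffort
spec:
  hard:
    pods: "2"        # total number of matching pods allowed in the project
  scopes:
  - BestEffort       # only pods with best-effort quality of service count against this quota
EOF

# View the resulting usage statistics
$ oc describe quota besteffort -n demoproject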
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_using_ibm_cloud/making-open-source-more-inclusive |
Chapter 1. Overview | Chapter 1. Overview The Java software development kit is a collection of classes that allows you to interact with the Red Hat Virtualization Manager in Java-based projects. By downloading these classes and adding them to your project, you can access a range of functionality for high-level automation of administrative tasks. Red Hat Virtualization provides two versions of the Java software development kit: Version 3 The V3 Java software development kit provides backwards compatibility with the class and method structure provided in the Java software development kit as of the latest release of Red Hat Enterprise Virtualization 3.6. Applications written using the Java software development kit from Red Hat Enterprise Virtualization 3.6 can be used with this version without modification. Warning Version 3 is no longer supported. The information and examples specific to version 3 are provided as a reference only. Migrate to version 4 for continued support. Version 4 The V4 Java software development kit provides an updated set of class and method names and signatures. Applications written using the Java software development kit from Red Hat Enterprise Virtualization 3.6 must be updated before they can be used with this version. Either version of the Java software development kit can be used in a Red Hat Virtualization environment as required by installing the corresponding package and adding the required libraries to your Java project. 1.1. Prerequisites To install the Java software development kit, you must have: A system where Red Hat Enterprise Linux 7 is installed. Both the Server and Workstation variants are supported. A subscription to Red Hat Virtualization entitlements. Important The software development kit is an interface for the Red Hat Virtualization REST API. As such, you must use the version of the software development kit that corresponds to the version of your Red Hat Virtualization environment. For example, if you are using Red Hat Virtualization 4.2, you must use the version of the software development kit designed for 4.1. 1.2. Installing the Java Software Development Kit Install the Java software development kit and accompanying documentation. Installing the Java Software Development Kit Enable the repositories: Install the required packages: For V3: The V3 Java software development kit and accompanying documentation are downloaded to the /usr/share/java/rhevm-sdk-java directory and can be added to Java projects. For V4: The V4 Java software development kit and accompanying documentation are downloaded to the /usr/share/java/java-ovirt-engine-sdk4 directory and can be added to Java projects. 1.3. Dependencies To use the Java software development kit in Java applications, you must add the following JAR files to the class path of those applications: commons-beanutils.jar commons-codec.jar httpclient.jar httpcore.jar jakarta-commons-logging.jar log4j.jar The packages that provide these JAR files are installed as dependencies to the ovirt-engine-sdk-java package. By default, they are available in the /usr/share/java directory on Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 systems. 1.4. Configuring SSL The Red Hat Virtualization Manager Java SDK provides full support for HTTP over Secure Socket Layer (SSL) and the IETF Transport Layer Security (TLS) protocol using the Java Secure Socket Extension (JSSE). JSSE has been integrated into the Java 2 platform as of version 1.4 and works with the Java SDK out of the box. 
On earlier Java 2 versions, JSSE must be manually installed and configured. 1.4.1. Configuring SSL The following procedure outlines how to configure SSL using the Java SDK. Configuring SSL Download the certificate used by the Red Hat Virtualization Manager. Note By default, the location of the certificate used by the Red Hat Virtualization Manager is in /etc/pki/ovirt-engine/ca.pem . Create a truststore: Specify the trustStoreFile and trustStorePassword arguments when constructing an instance of the Api or Connection object: Note If you do not specify the trustStoreFile option when creating a connection, the Java SDK attempts to use the default truststore specified by the system variable javax.net.ssl.trustStore . If this system variable does not specify a truststore, the Java SDK attempts to use a truststore specified in USDJAVA_HOME/lib/security/jssecacerts or USDJAVA_HOME/lib/security/cacerts . 1.4.2. Host Verification By default, the identity of the host name in the certificate is verified when attempting to open a connection to the Red Hat Virtualization Manager. You can disable verification by passing the following argument when constructing an instance of the Connection class: Important This method should not be used for production systems due to security reasons, unless it is a conscious decision and you are aware of the security implications of not verifying host identity. | [
"subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-rhv-4.3-manager-rpms --enable=jb-eap-7.2-for-rhel-7-server-rpms",
"yum install ovirt-engine-sdk-java ovirt-engine-sdk-javadoc",
"yum install java-ovirt-engine-sdk4",
"keytool -import -alias \"server.crt truststore\" -file ca.crt -keystore server.truststore",
"myBuilder.trustStoreFile(\"/home/username/server.truststore\"); myBuilder.trustStorePassword(\"p@ssw0rd\");",
"myBuilder.insecure(true);"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/java_sdk_guide/chap-overview |
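As a hedged convenience sketch for the dependencies described above (the file name ListVms.java is hypothetical, and the JAR locations assume the default install paths of the V4 SDK and its dependency packages), compiling and running a program against the Java software development kit might look like this:

# Compile a hypothetical example against the V4 SDK and its dependency JARs
$ javac -cp "/usr/share/java/java-ovirt-engine-sdk4/*:/usr/share/java/*" ListVms.java

# Run it with the same class path plus the current directory
$ java -cp ".:/usr/share/java/java-ovirt-engine-sdk4/*:/usr/share/java/*" ListVms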
Chapter 3. Deploying an Overcloud with the Bare Metal Service | Chapter 3. Deploying an Overcloud with the Bare Metal Service For full details about overcloud deployment with the director, see Director Installation and Usage . This chapter covers only the deployment steps specific to ironic. 3.1. Creating the Ironic template Use an environment file to deploy the overcloud with the Bare Metal service enabled. A template is located on the director node at /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml . Filling in the template Additional configuration can be specified either in the provided template or in an additional yaml file, for example ~/templates/ironic.yaml . For a hybrid deployment with both bare metal and virtual instances, you must add AggregateInstanceExtraSpecsFilter to the list of NovaSchedulerDefaultFilters . If you have not set NovaSchedulerDefaultFilters anywhere, you can do so in ironic.yaml. For an example, see Section 3.4, "Example Templates" . Note If you are using SR-IOV, NovaSchedulerDefaultFilters is already set in tripleo-heat-templates/environments/neutron-sriov.yaml . Append AggregateInstanceExtraSpecsFilter to this list. The type of cleaning that occurs before and between deployments is set by IronicCleaningDiskErase . By default, this is set to 'full' by deployment/ironic/ironic-conductor-container-puppet.yaml . Setting this to 'metadata' can substantially speed up the process, as it cleans only the partition table, however, since the deployment will be less secure in a multi-tenant environment, you should do this only in a trusted tenant environment. You can add drivers with the IronicEnabledHardwareTypes parameter. By default, ipmi and redfish are enabled. For a full list of configuration parameters, see Bare Metal in the Overcloud Parameters guide. 3.2. Configuring the undercloud for bare metal provisioning over IPv6 Important This feature is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . If you have IPv6 nodes and infrastructure, you can configure the undercloud and the provisioning network to use IPv6 instead of IPv4 so that director can provision and deploy Red Hat OpenStack Platform onto IPv6 nodes. However, there are some considerations: Stateful DHCPv6 is available only with a limited set of UEFI firmware. For more information, see Bugzilla #1575026 . Dual stack IPv4/6 is not available. Tempest validations might not perform correctly. IPv4 to IPv6 migration is not available during upgrades. Modify the undercloud.conf file to enable IPv6 provisioning in Red Hat OpenStack Platform. Prerequisites An IPv6 address on the undercloud. For more information, see Configuring an IPv6 address on the undercloud in the IPv6 Networking for the Overcloud guide. Procedure Copy the sample undercloud.conf file, or modify your existing undercloud.conf file. Set the following parameter values in the undercloud.conf file: Set ipv6_address_mode to dhcpv6-stateless or dhcpv6-stateful if your NIC supports stateful DHCPv6 with Red Hat OpenStack Platform. For more information about stateful DHCPv6 availability, see Bugzilla #1575026 . Set enable_routed_networks to true if you do not want the undercloud to create a router on the provisioning network. 
In this case, the data center router must provide router advertisements. Otherwise, set this value to false . Set local_ip to the IPv6 address of the undercloud. Use IPv6 addressing for the undercloud interface parameters undercloud_public_host and undercloud_admin_host . In the [ctlplane-subnet] section, use IPv6 addressing in the following parameters: cidr dhcp_start dhcp_end gateway inspection_iprange In the [ctlplane-subnet] section, set an IPv6 nameserver for the subnet in the dns_nameservers parameter. 3.3. Network Configuration If you use the default flat bare metal network, you must create a bridge br-baremetal for ironic to use. You can specify this in an additional template: ~/templates/network-environment.yaml You can configure this bridge either in the provisioning network (control plane) of the controllers, so that you can reuse this network as the bare metal network, or add a dedicated network. The configuration requirements are the same, however the bare metal network cannot be VLAN-tagged, as it is used for provisioning. ~/templates/nic-configs/controller.yaml Note The Bare Metal service in the overcloud is designed for a trusted tenant environment, as the bare metal nodes have direct access to the control plane network of your OpenStack installation. 3.3.1. Configuring a custom IPv4 provisioning network The default flat provisioning network can introduce security concerns in a customer environment as a tenant can interfere with the undercloud network. To prevent this risk, you can configure a custom composable bare metal provisioning network for ironic services that does not have access to the control plane: Configure the shell to access Identity as the administrative user: Copy the network_data.yaml file: Edit the new network_data.yaml file and add a new network for IPv4 overcloud provisioning: Update the network_environments.yaml and nic-configs/controller.yaml files to use the new network. In the network_environments.yaml file, remap Ironic networks: In the nic-configs/controller.yaml file, add an interface and necessary parameters: Copy the roles_data.yaml file: Edit the new roles_data.yaml and add the new network for the controller: Include the new network_data.yaml and roles_data.yaml files in the deploy command: 3.3.2. Configuring a custom IPv6 provisioning network Important This feature is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . Create a custom IPv6 provisioning network to provision and deploy the overcloud over IPv6. Procedure Configure the shell to access Identity as the administrative user: Copy the network_data.yaml file: Edit the new network_data.yaml file and add a new network for overcloud provisioning: Replace USDIPV6_ADDRESS with the IPv6 address of your IPv6 subnet. Replace USDIPV6_MASK with the IPv6 network mask for your IPv6 subnet. Replace USDIPV6_START_ADDRESS and USDIPV6_END_ADDRESS with the IPv6 range that you want to use for address allocation. Replace USDIPV6_GW_ADDRESS with the IPv6 address of your gateway. 
Create a new file network-environment.yaml and define IPv6 settings for the provisioning network: Remap the ironic networks to use the new IPv6 provisioning network: Set the IronicIpVersion parameter to 6 : Set the RabbitIPv6 , MysqlIPv6 , and RedisIPv6 parameters to True : Set the ControlPlaneSubnetCidr parameter to the subnet IPv6 mask length for the provisioning network: Set the ControlPlaneDefaultRoute parameter to the IPv6 address of the gateway router for the provisioning network: Add an interface and necessary parameters to the nic-configs/controller.yaml file: Copy the roles_data.yaml file: Edit the new roles_data.yaml and add the new network for the controller: When you deploy the overcloud, include the new network_data.yaml and roles_data.yaml files in the deployment command with the -n and -r options, and the network-environment.yaml file with the -e option: For more information about IPv6 network configuration, see Configuring the network in the IPv6 Networking for the Overcloud guide. 3.4. Example Templates The following is an example template file. This file might not meet the requirements of your environment. Before using this example, ensure that it does not interfere with any existing configuration in your environment. ~/templates/ironic.yaml parameter_defaults: NovaSchedulerDefaultFilters: - RetryFilter - AggregateInstanceExtraSpecsFilter - AvailabilityZoneFilter - ComputeFilter - ComputeCapabilitiesFilter - ImagePropertiesFilter IronicCleaningDiskErase: metadata In this example: The AggregateInstanceExtraSpecsFilter allows both virtual and bare metal instances, for a hybrid deployment. Disk cleaning that is done before and between deployments erases only the partition table (metadata). 3.5. Enabling Ironic Introspection in the Overcloud To enable Bare Metal introspection, include both the following files in the deploy command: For deployments using OVN ironic-overcloud.yaml ironic-inspector.yaml For deployments using OVS ironic.yaml ironic-inspector.yaml You can find these files in the /usr/share/openstack-tripleo-heat-templates/environments/services directory. Use the following example to include configuration details for the ironic inspector that correspond to your environment: IronicInspectorSubnets This parameter can contain multiple ranges and works with both spine and leaf. IPAImageURLs This parameter contains details about the IPA kernel and ramdisk. In most cases, you can use the same images that you use on the undercloud. If you omit this parameter, place alternatives on each controller. IronicInspectorInterface Use this parameter to specify the bare metal network interface. Note If you use a composable Ironic or IronicConductor role, you must include the IronicInspector service in the Ironic role in your roles file. 3.6. Deploying the Overcloud To enable the Bare Metal service, include your ironic environment files with the -e option when deploying or redeploying the overcloud, along with the rest of your overcloud configuration. For example: For more information about deploying the overcloud, see Deployment command options and Including Environment Files in Overcloud Creation in the Director Installation and Usage guide. For more information about deploying the overcloud over IPv6, see Setting up your environment and Creating the overcloud in the IPv6 Networking for the Overcloud guide. 3.7. Testing the Bare Metal Service You can use the OpenStack Integration Test Suite to validate your Red Hat OpenStack deployment. 
For more information, see the OpenStack Integration Test Suite Guide . Additional Ways to Verify the Bare Metal Service: Configure the shell to access Identity as the administrative user: Check that the nova-compute service is running on the controller nodes: If you have changed the default ironic drivers, ensure that the required drivers are enabled: Ensure that the ironic endpoints are listed: | [
"ipv6_address_mode = dhcpv6-stateless enable_routed_networks: false local_ip = <ipv6-address> undercloud_admin_host = <ipv6-address> undercloud_public_host = <ipv6-address> [ctlplane-subnet] cidr = <ipv6-address>::<ipv6-mask> dhcp_start = <ipv6-address> dhcp_end = <ipv6-address> dns_nameservers = <ipv6-dns> gateway = <ipv6-address> inspection_iprange = <ipv6-address>,<ipv6-address>",
"parameter_defaults: NeutronBridgeMappings: datacentre:br-ex,baremetal:br-baremetal NeutronFlatNetworks: datacentre,baremetal",
"network_config: - type: ovs_bridge name: br-baremetal use_dhcp: false members: - type: interface name: eth1",
"source ~/stackrc",
"(undercloud) [stack@host01 ~]USD cp /usr/share/openstack-tripleo-heat-templates/network_data.yaml .",
"custom network for overcloud provisioning - name: OcProvisioning name_lower: oc_provisioning vip: true vlan: 205 ip_subnet: '172.23.3.0/24' allocation_pools: [{'start': '172.23.3.10', 'end': '172.23.3.200'}]",
"ServiceNetMap: IronicApiNetwork: oc_provisioning IronicNetwork: oc_provisioning",
"USDnetwork_config: - type: vlan vlan_id: get_param: OcProvisioningNetworkVlanID addresses: - ip_netmask: get_param: OcProvisioningIpSubnet",
"(undercloud) [stack@host01 ~]USD cp /usr/share/openstack-tripleo-heat-templates/roles_data.yaml .",
"networks: OcProvisioning: subnet: oc_provisioning_subnet",
"-n /home/stack/network_data.yaml -r /home/stack/roles_data.yaml \\",
"source ~/stackrc",
"cp /usr/share/openstack-tripleo-heat-templates/network_data.yaml .",
"custom network for IPv6 overcloud provisioning - name: OcProvisioningIPv6 vip: true name_lower: oc_provisioning_ipv6 vlan: 10 ipv6: true ipv6_subnet: 'USDIPV6_SUBNET_ADDRESS/USDIPV6_MASK' ipv6_allocation_pools: [{'start': 'USDIPV6_START_ADDRESS', 'end': 'USDIPV6_END_ADDRESS'}] gateway_ipv6: 'USDIPV6_GW_ADDRESS'",
"touch /home/stack/network-environment.yaml`",
"ServiceNetMap: IronicApiNetwork: oc_provisioning_ipv6 IronicNetwork: oc_provisioning_ipv6",
"parameter_defaults: IronicIpVersion: 6",
"parameter_defaults: RabbitIPv6: True MysqlIPv6: True RedisIPv6: True",
"parameter_defaults: ControlPlaneSubetCidr: '64'",
"parameter_defaults: ControlPlaneDefaultRoute: <ipv6-address>",
"USDnetwork_config: - type: vlan vlan_id: get_param: OcProvisioningIPv6NetworkVlanID addresses: - ip_netmask: get_param: OcProvisioningIPv6IpSubnet",
"(undercloud) [stack@host01 ~]USD cp /usr/share/openstack-tripleo-heat-templates/roles_data.yaml .",
"networks: - OcProvisioningIPv6",
"sudo openstack overcloud deploy --templates -n /home/stack/network_data.yaml -r /home/stack/roles_data.yaml -e /home/stack/network-environment.yaml",
"parameter_defaults: NovaSchedulerDefaultFilters: - RetryFilter - AggregateInstanceExtraSpecsFilter - AvailabilityZoneFilter - ComputeFilter - ComputeCapabilitiesFilter - ImagePropertiesFilter IronicCleaningDiskErase: metadata",
"parameter_defaults: IronicInspectorSubnets: - ip_range: 192.168.101.201,192.168.101.250 IPAImageURLs: '[\"http://192.168.24.1:8088/agent.kernel\", \"http://192.168.24.1:8088/agent.ramdisk\"]' IronicInspectorInterface: 'br-baremetal'",
"ServicesDefault: OS::TripleO::Services::IronicInspector",
"openstack overcloud deploy --templates -e ~/templates/node-info.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e ~/templates/network-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml -e ~/templates/ironic.yaml \\",
"source ~/overcloudrc",
"openstack compute service list -c Binary -c Host -c Status",
"openstack baremetal driver list",
"openstack catalog list"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/bare_metal_provisioning/sect-deploy |
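A combined, hedged example may help tie the earlier sections together. The sketch below assumes an OVN-based deployment, the example ironic.yaml and network-environment.yaml files shown above, and a node-info.yaml environment file like the one referenced in the deploy command; adjust the paths to your own templates before running it.
# Sketch only: deploy with the Bare Metal service and introspection enabled (OVN), then run read-only checks.
openstack overcloud deploy --templates \
  -e ~/templates/node-info.yaml \
  -e ~/templates/network-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml \
  -e ~/templates/ironic.yaml
# Verify the result with the commands from Section 3.7:
source ~/overcloudrc
openstack compute service list -c Binary -c Host -c Status
openstack baremetal driver list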
4.5. RESTful Service Description Language (RSDL) | 4.5. RESTful Service Description Language (RSDL) RESTful Service Description Language (RSDL) provides a description of the structure and elements in the REST API in one whole XML specification. Invoke the RSDL using the following request. This produces an XML document in the following format: Table 4.5. RSDL Structure Elements Element Description description A plain text description of the RSDL document. version The API version, including major release, minor release, build and revision . schema A link to the XML schema (XSD) file. links Defines each link in the API. Each link element contains the following a structure: Table 4.6. RSDL Link Structure Elements Element Description link A URI for API requests. Includes a URI attribute ( href ) and a relationship type attribute ( rel ). request Defines the request properties required for the link. http_method The method type to access this link. Includes the standard HTTP methods for REST API access: GET , POST , PUT and DELETE . headers Defines the headers for the HTTP request. Contains a series of header elements, which each contain a header name and value to define the header. body Defines the body for the HTTP request. Contains a resource type and a parameter_set , which contains a sets of parameter elements with attributes to define whether they are required for a request and the data type . The parameter element also includes a name element to define the Red Hat Virtualization Manager property to modify and also a further parameter_set subset if type is set to collection . response Defines the output for the HTTP request. Contains a type element to define the resource structure to output. Use the RSDL in your applications as a method to map all links and parameter requirements for controlling a Red Hat Virtualization environment. | [
"GET /ovirt-engine/api?rsdl HTTP/1.1 Accept: application/xml",
"<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?> <rsdl href=\"/ovirt-engine/api?rsdl\" rel=\"rsdl\"> <description>...</description> <version major=\"4\" minor=\"0\" build=\"0\" revision=\"0\"/> <schema href=\"/ovirt-engine/api?schema\" rel=\"schema\"> <name>...</name> <description>...</description> </schema> <links> <link href=\"/ovirt-engine/api/capabilities\" rel=\"get\"> </link> </links> </rsdl>",
"<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?> <rsdl href=\"/ovirt-engine/api?rsdl\" rel=\"rsdl\"> <links> <link href=\"/ovirt-engine/api/...\" rel=\"...\"> <request> <http_method>...</http_method> <headers> <header> <name>...</name> <value>...</value> </header> </headers> <body> <type>...</type> <parameters_set> <parameter required=\"...\" type=\"...\"> <name>...</name> </parameter> </parameters_set> </body> </request> <response> <type>...</type> </response> </link> </links> </rsdl>"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/restful_service_description_language_rsdl |
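For a quick command-line check, the following hedged sketch retrieves the RSDL and the linked XML schema with curl. The Manager host name and credentials are placeholders, and the -k flag (which skips certificate verification) is only appropriate in a test environment.
# Sketch: fetch the RSDL document and the schema it references.
curl -k -u 'admin@internal:password' -H 'Accept: application/xml' 'https://manager.example.com/ovirt-engine/api?rsdl'
curl -k -u 'admin@internal:password' -H 'Accept: application/xml' 'https://manager.example.com/ovirt-engine/api?schema'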
5.6. Configuring Scope for the Referential Integrity | 5.6. Configuring Scope for the Referential Integrity If an entry is deleted, the references to it are deleted or modified to reflect the change. When this update is applied to all entries and all groups, it can impact performance and prevents flexibility of restricting the referential integrity to selected subtrees. Defining a scope addresses this problem. For example, there may be one suffix, dc=example,dc=com , containing two subtrees: ou=active users,dc=example,dc=com and ou=deleted users,dc=example,dc=com . Entries in deleted users should not be handled for purposes of referential integrity. 5.6.1. Parameters That Control the Referential Integrity Scope The following three parameters can be used to define the scope in the Referential Integrity Postoperation plug-in configuration: nsslapd-pluginEntryScope This multi-value parameter controls the scope of the entry that is deleted or renamed. It defines the subtree in which the Referential Integrity Postoperation plug-in looks for the delete or rename operations of a user entry. If a user is deleted or renamed that does not exist under the defined subtree, the plug-in ignores the operation. The parameter allows you to specify to which branches of the database the plug-in should apply the operation. nsslapd-pluginExcludeEntryScope This parameter also controls the scope of the entry that is deleted or renamed. It defines the subtree in which the Referential Integrity Postoperation plug-in ignores any operations for deleting or renaming a user. nsslapd-pluginContainerScope This parameter controls the scope of groups in which references are updated. After a user is deleted, the Referential Integrity Postoperation plug-in looks for the groups to which the user belongs and updates them accordingly. This parameter specifies which branch the plug-in searches for the groups to which the user belongs. The Referential Integrity Postoperation plug-in only updates groups that are under the specified container branch, and leaves all other groups not updated. 5.6.2. Displaying the Referential Integrity Scope Using the Command Line The following commands show how to display the scope settings using the command line: 5.6.3. Displaying the Referential Integrity Scope Using the Web Console The following procedure shows how to display the scope settings using the web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Plugins menu. Select the Referential Integrity plug-in. See the Entry Scope , Exclude Entry Scope , and Container Scope fields for the currently configured scope. 5.6.4. Configuring the Referential Integrity Scope Using the Command Line To configure the referential integrity scope using the command line: Optionally, display the scope settings. See Section 5.6.2, "Displaying the Referential Integrity Scope Using the Command Line" . The following commands show how to configure the individual referential integrity scope settings using the command line: To set a distinguished name (DN): To the nsslapd-pluginEntryScope parameter: To the nsslapd-pluginExcludeEntryScope parameter: To the nsslapd-pluginContainerScope parameter: To remove a DN: From the nsslapd-pluginEntryScope parameter: From the nsslapd-pluginExcludeEntryScope parameter: From the nsslapd-pluginContainerScope parameter: Restart the instance: 5.6.5. 
Configuring the Referential Integrity Scope Using the Web Console To configure the referential integrity scope using the web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Select the Plugins menu. Select the Referential Integrity plug-in. Set the scope in the Entry Scope , Exclude Entry Scope , and Container Scope fields. Click Save Config . | [
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin referential-integrity show nsslapd-pluginEntryScope: DN nsslapd-pluginExcludeEntryScope: DN nsslapd-pluginContainerScope: DN",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin referential-integrity set --entry-scope=\" DN \"",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin referential-integrity set --exclude-entry-scope=\" DN \"",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin referential-integrity set --container-scope=\" DN \"",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin referential-integrity set --entry-scope=delete",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin referential-integrity set --exclude-entry-scope=delete",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin referential-integrity set --container-scope=delete",
"dsctl instance_name restart"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/Maintaining_Referential_Integrity-Configuring_Scope_For_Referential_Integrity |
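As a worked example of the scenario described at the start of Section 5.6 (process entries under ou=active users while ignoring ou=deleted users), the sketch below combines the options shown above. The suffix, subtree names, and instance name are placeholders for your own deployment.
# Sketch: scope referential integrity to the active users subtree, then restart the instance.
dsconf -D "cn=Directory Manager" ldap://server.example.com plugin referential-integrity set --entry-scope="ou=active users,dc=example,dc=com"
dsconf -D "cn=Directory Manager" ldap://server.example.com plugin referential-integrity set --exclude-entry-scope="ou=deleted users,dc=example,dc=com"
dsctl instance_name restart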
Chapter 142. Hazelcast Topic Component | Chapter 142. Hazelcast Topic Component Available as of Camel version 2.15 The Hazelcast Topic component is one of Camel Hazelcast Components which allows you to access Hazelcast distributed topic. 142.1. Options The Hazelcast Topic component supports 3 options, which are listed below. Name Description Default Type hazelcastInstance (advanced) The hazelcast instance reference which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance. HazelcastInstance hazelcastMode (advanced) The hazelcast mode reference which kind of instance should be used. If you don't specify the mode, then the node mode will be the default. node String resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Hazelcast Topic endpoint is configured using URI syntax: with the following path and query parameters: 142.1.1. Path Parameters (1 parameters): Name Description Default Type cacheName Required The name of the cache String 142.1.2. Query Parameters (16 parameters): Name Description Default Type defaultOperation (common) To specify a default operation to use, if no operation header has been provided. HazelcastOperation hazelcastInstance (common) The hazelcast instance reference which can be used for hazelcast endpoint. HazelcastInstance hazelcastInstanceName (common) The hazelcast instance reference name which can be used for hazelcast endpoint. If you don't specify the instance reference, camel use the default hazelcast instance from the camel-hazelcast instance. String reliable (common) Define if the endpoint will use a reliable Topic struct or not. false boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean pollingTimeout (consumer) Define the polling timeout of the Queue consumer in Poll mode 10000 long poolSize (consumer) Define the Pool size for Queue Consumer Executor 1 int queueConsumerMode (consumer) Define the Queue Consumer mode: Listen or Poll Listen HazelcastQueueConsumer Mode exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean concurrentConsumers (seda) To use concurrent consumers polling from the SEDA queue. 1 int onErrorDelay (seda) Milliseconds before consumer continues polling after an error has occurred. 1000 int pollTimeout (seda) The timeout used when consuming from the SEDA queue. When a timeout occurs, the consumer can check whether it is allowed to continue running. 
Setting a lower value allows the consumer to react more quickly upon shutdown. 1000 int transacted (seda) If set to true then the consumer runs in transaction mode, where the messages in the seda queue will only be removed if the transaction commits, which happens when the processing is complete. false boolean transferExchange (seda) If set to true the whole Exchange will be transferred. If header or body contains non-serializable objects, they will be skipped. false boolean 142.2. Spring Boot Auto-Configuration The component supports 8 options, which are listed below. Name Description Default Type camel.component.hazelcast-topic.customizer.hazelcast-instance.enabled Enable or disable the cache-manager customizer. true Boolean camel.component.hazelcast-topic.customizer.hazelcast-instance.enabled Enable or disable the cache-manager customizer. true Boolean camel.component.hazelcast-topic.customizer.hazelcast-instance.override Configure if the cache manager eventually set on the component should be overridden by the customizer. false Boolean camel.component.hazelcast-topic.customizer.hazelcast-instance.override Configure if the cache manager eventually set on the component should be overridden by the customizer. false Boolean camel.component.hazelcast-topic.enabled Enable hazelcast-topic component true Boolean camel.component.hazelcast-topic.hazelcast-instance The hazelcast instance reference which can be used for hazelcast endpoint. If you don't specify the instance reference, Camel uses the default hazelcast instance from the camel-hazelcast instance. The option is a com.hazelcast.core.HazelcastInstance type. String camel.component.hazelcast-topic.hazelcast-mode The hazelcast mode reference which kind of instance should be used. If you don't specify the mode, then the node mode will be the default. node String camel.component.hazelcast-topic.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 142.3. Topic producer - to("hazelcast-topic:foo") The topic producer provides only one operation (publish). 142.3.1. Sample for publish : from("direct:add") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.PUBLISH)) .toF("hazelcast-%sbar", HazelcastConstants.PUBLISH_OPERATION); 142.4. Topic consumer - from("hazelcast-topic:foo") The topic consumer provides only one operation (received). As expected for topics, this component supports multiple consumers, so you are free to attach as many consumers as you need to the same hazelcast topic. fromF("hazelcast-%sfoo", HazelcastConstants.TOPIC_PREFIX) .choice() .when(header(HazelcastConstants.LISTENER_ACTION).isEqualTo(HazelcastConstants.RECEIVED)) .log("...message received") .otherwise() .log("...this should never have happened") | [
"hazelcast-topic:cacheName",
"from(\"direct:add\") .setHeader(HazelcastConstants.OPERATION, constant(HazelcastOperation.PUBLISH)) .toF(\"hazelcast-%sbar\", HazelcastConstants.PUBLISH_OPERATION);",
"fromF(\"hazelcast-%sfoo\", HazelcastConstants.TOPIC_PREFIX) .choice() .when(header(HazelcastConstants.LISTENER_ACTION).isEqualTo(HazelcastConstants.RECEIVED)) .log(\"...message received\") .otherwise() .log(\"...this should never have happened\")"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/hazelcast-topic-component |
5.8. Using Zones to Manage Incoming Traffic Depending on Source | 5.8. Using Zones to Manage Incoming Traffic Depending on Source You can use zones to manage incoming traffic based on its source. That enables you to sort incoming traffic and route it through different zones to allow or disallow services that can be reached by that traffic. If you add a source to a zone, the zone becomes active and any incoming traffic from that source will be directed through it. You can specify different settings for each zone, which is applied to the traffic from the given sources accordingly. You can use more zones even if you only have one network interface. 5.8.1. Adding a Source To route incoming traffic into a specific source, add the source to that zone. The source can be an IP address or an IP mask in the Classless Inter-domain Routing (CIDR) notation. To set the source in the current zone: To set the source IP address for a specific zone: The following procedure allows all incoming traffic from 192.168.2.15 in the trusted zone: List all available zones: Add the source IP to the trusted zone in the permanent mode: Make the new settings persistent: 5.8.2. Removing a Source Removing a source from the zone cuts off the traffic coming from it. List allowed sources for the required zone: Remove the source from the zone permanently: Make the new settings persistent: 5.8.3. Adding a Source Port To enable sorting the traffic based on a port of origin, specify a source port using the --add-source-port option. You can also combine this with the --add-source option to limit the traffic to a certain IP address or IP range. To add a source port: 5.8.4. Removing a Source Port By removing a source port you disable sorting the traffic based on a port of origin. To remove a source port: 5.8.5. Using Zones and Sources to Allow a Service for Only a Specific Domain To allow traffic from a specific network to use a service on a machine, use zones and source. The following procedure allows only HTTP traffic from the 192.0.2.0/24 network while any other traffic is blocked. Warning When you configure this scenario, use a zone that has the default target. Using a zone that has the target set to ACCEPT is a security risk, because for traffic from 192.0.2.0/24 , all network connections would be accepted. List all available zones: Add the IP range to the internal zone to route the traffic originating from the source through the zone: Add the http service to the internal zone: Make the new settings persistent: Check that the internal zone is active and that the service is allowed in it: 5.8.6. Configuring Traffic Accepted by a Zone Based on Protocol You can allow incoming traffic to be accepted by a zone based on the protocol. All traffic using the specified protocol is accepted by a zone, in which you can apply further rules and filtering. Adding a Protocol to a Zone By adding a protocol to a certain zone, you allow all traffic with this protocol to be accepted by this zone. To add a protocol to a zone: Note To receive multicast traffic, use the igmp value with the --add-protocol option. Removing a Protocol from a Zone By removing a protocol from a certain zone, you stop accepting all traffic based on this protocol by the zone. To remove a protocol from a zone: | [
"~]# firewall-cmd --add-source=<source>",
"~]# firewall-cmd --zone= zone-name --add-source=<source>",
"~]# firewall-cmd --get-zones",
"~]# firewall-cmd --zone=trusted --add-source= 192.168.2.15",
"~]# firewall-cmd --runtime-to-permanent",
"~]# firewall-cmd --zone= zone-name --list-sources",
"~]# firewall-cmd --zone= zone-name --remove-source=<source>",
"~]# firewall-cmd --runtime-to-permanent",
"~]# firewall-cmd --zone= zone-name --add-source-port=<port-name>/<tcp|udp|sctp|dccp>",
"~]# firewall-cmd --zone= zone-name --remove-source-port=<port-name>/<tcp|udp|sctp|dccp>",
"~]# firewall-cmd --get-zones block dmz drop external home internal public trusted work",
"~]# firewall-cmd --zone=internal --add-source=192.0.2.0/24",
"~]# firewall-cmd --zone=internal --add-service=http",
"~]# firewall-cmd --runtime-to-permanent",
"~]# firewall-cmd --zone=internal --list-all internal (active) target: default icmp-block-inversion: no interfaces: sources: 192.0.2.0/24 services: dhcpv6-client mdns samba-client ssh http",
"~]# firewall-cmd --zone= zone-name --add-protocol= port-name / tcp|udp|sctp|dccp|igmp",
"~]# firewall-cmd --zone= zone-name --remove-protocol= port-name / tcp|udp|sctp|dccp|igmp"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-Using_Zones_to_Manage_Incoming_Traffic_Depending_on_Source |
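Section 5.8.3 describes source ports without a concrete example, so here is a small hedged sketch that follows the syntax above; the zone, source network, and port values are illustrative only.
# Sketch: sort traffic from one network and one TCP source port into the trusted zone, then persist it.
~]# firewall-cmd --zone=trusted --add-source=192.168.2.0/24
~]# firewall-cmd --zone=trusted --add-source-port=8080/tcp
~]# firewall-cmd --runtime-to-permanent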
Chapter 11. SecretList [image.openshift.io/v1] | Chapter 11. SecretList [image.openshift.io/v1] Description SecretList is a list of Secret. Type object Required items 11.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Secret) Items is a list of secret objects. More info: https://kubernetes.io/docs/concepts/configuration/secret kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 11.2. API endpoints The following API endpoints are available: /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreams/{name}/secrets GET : read secrets of the specified ImageStream 11.2.1. /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreams/{name}/secrets Table 11.1. Global path parameters Parameter Type Description name string name of the SecretList namespace string object name and auth scope, such as for teams and projects Table 11.2. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. 
If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description read secrets of the specified ImageStream Table 11.3. HTTP responses HTTP code Reponse body 200 - OK SecretList schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/image_apis/secretlist-image-openshift-io-v1 |
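A short hedged example of calling this endpoint follows. It assumes a logged-in oc client with permission to read the image stream; the project and image stream names are placeholders.
# Sketch: read the secrets associated with an image stream through the raw API path.
oc get --raw "/apis/image.openshift.io/v1/namespaces/myproject/imagestreams/myimagestream/secrets"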
Chapter 14. Enabling the Red Hat Virtualization Manager Repositories | Chapter 14. Enabling the Red Hat Virtualization Manager Repositories Register the system with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable Manager repositories. Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: Note If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager. Find the Red Hat Virtualization Manager subscription pool and record the pool ID: Use the pool ID to attach the subscription to the system: Note To view currently attached subscriptions: To list all enabled repositories: Configure the repositories: Enable the pki-deps module. Enable version 12 of the postgresql module. Synchronize installed packages to update them to the latest available versions. Additional resources For information on modules and module streams, see the following sections in Installing , managing , and removing user-space components Module streams Selecting a stream before installation of packages Resetting module streams Switching to a later stream | [
"subscription-manager register",
"subscription-manager list --available",
"subscription-manager attach --pool=pool_id",
"subscription-manager list --consumed",
"yum repolist",
"subscription-manager repos --disable='*' --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms",
"yum module -y enable pki-deps",
"yum module -y enable postgresql:12",
"yum distro-sync"
] | https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/post-deploy-rhvm-subscribe |
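Before running yum distro-sync, it can be worth confirming that the repositories and module streams are in the expected state. The following hedged sketch only reads state and changes nothing.
# Sketch: read-only checks of enabled repositories and module streams.
subscription-manager repos --list-enabled
yum repolist
yum module list pki-deps postgresql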
Chapter 5. Pipelines CLI (tkn) | Chapter 5. Pipelines CLI (tkn) 5.1. Installing tkn Use the tkn CLI to manage Red Hat OpenShift Pipelines from a terminal. The following section describes how to install tkn on different platforms. You can also find the URL to the latest binaries from the OpenShift Container Platform web console by clicking the ? icon in the upper-right corner and selecting Command Line Tools . 5.1.1. Installing Red Hat OpenShift Pipelines CLI (tkn) on Linux For Linux distributions, you can download the CLI directly as a tar.gz archive. Procedure Download the relevant CLI. Linux (x86_64, amd64) Linux on IBM Z and LinuxONE (s390x) Linux on IBM Power Systems (ppc64le) Unpack the archive: USD tar xvzf <file> Place the tkn binary in a directory that is on your PATH . To check your PATH , run: USD echo USDPATH 5.1.2. Installing Red Hat OpenShift Pipelines CLI (tkn) on Linux using an RPM For Red Hat Enterprise Linux (RHEL) version 8, you can install the Red Hat OpenShift Pipelines CLI ( tkn ) as an RPM. Prerequisites You have an active OpenShift Container Platform subscription on your Red Hat account. You have root or sudo privileges on your local system. Procedure Register with Red Hat Subscription Manager: # subscription-manager register Pull the latest subscription data: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*pipelines*' In the output for the command, find the pool ID for your OpenShift Container Platform subscription and attach the subscription to the registered system: # subscription-manager attach --pool=<pool_id> Enable the repositories required by Red Hat OpenShift Pipelines: Linux (x86_64, amd64) # subscription-manager repos --enable="pipelines-1.6-for-rhel-8-x86_64-rpms" Linux on IBM Z and LinuxONE (s390x) # subscription-manager repos --enable="pipelines-1.6-for-rhel-8-s390x-rpms" Linux on IBM Power Systems (ppc64le) # subscription-manager repos --enable="pipelines-1.6-for-rhel-8-ppc64le-rpms" Install the openshift-pipelines-client package: # yum install openshift-pipelines-client After you install the CLI, it is available using the tkn command: USD tkn version 5.1.3. Installing Red Hat OpenShift Pipelines CLI (tkn) on Windows For Windows, the tkn CLI is provided as a zip archive. Procedure Download the CLI . Unzip the archive with a ZIP program. Add the location of your tkn.exe file to your PATH environment variable. To check your PATH , open the command prompt and run the command: C:\> path 5.1.4. Installing Red Hat OpenShift Pipelines CLI (tkn) on macOS For macOS, the tkn CLI is provided as a tar.gz archive. Procedure Download the CLI . Unpack and unzip the archive. Move the tkn binary to a directory on your PATH. To check your PATH , open a terminal window and run: USD echo USDPATH 5.2. Configuring the OpenShift Pipelines tkn CLI Configure the Red Hat OpenShift Pipelines tkn CLI to enable tab completion. 5.2.1. Enabling tab completion After you install the tkn CLI, you can enable tab completion to automatically complete tkn commands or suggest options when you press Tab. Prerequisites You must have the tkn CLI tool installed. You must have bash-completion installed on your local system. Procedure The following procedure enables tab completion for Bash. 
Save the Bash completion code to a file: USD tkn completion bash > tkn_bash_completion Copy the file to /etc/bash_completion.d/ : USD sudo cp tkn_bash_completion /etc/bash_completion.d/ Alternatively, you can save the file to a local directory and source it from your .bashrc file instead. Tab completion is enabled when you open a new terminal. 5.3. OpenShift Pipelines tkn reference This section lists the basic tkn CLI commands. 5.3.1. Basic syntax tkn [command or options] [arguments... ] 5.3.2. Global options --help, -h 5.3.3. Utility commands 5.3.3.1. tkn Parent command for tkn CLI. Example: Display all options USD tkn 5.3.3.2. completion [shell] Print shell completion code which must be evaluated to provide interactive completion. Supported shells are bash and zsh . Example: Completion code for bash shell USD tkn completion bash 5.3.3.3. version Print version information of the tkn CLI. Example: Check the tkn version USD tkn version 5.3.4. Pipelines management commands 5.3.4.1. pipeline Manage pipelines. Example: Display help USD tkn pipeline --help 5.3.4.2. pipeline delete Delete a pipeline. Example: Delete the mypipeline pipeline from a namespace USD tkn pipeline delete mypipeline -n myspace 5.3.4.3. pipeline describe Describe a pipeline. Example: Describe the mypipeline pipeline USD tkn pipeline describe mypipeline 5.3.4.4. pipeline list Display a list of pipelines. Example: Display a list of pipelines USD tkn pipeline list 5.3.4.5. pipeline logs Display the logs for a specific pipeline. Example: Stream the live logs for the mypipeline pipeline USD tkn pipeline logs -f mypipeline 5.3.4.6. pipeline start Start a pipeline. Example: Start the mypipeline pipeline USD tkn pipeline start mypipeline 5.3.5. Pipeline run commands 5.3.5.1. pipelinerun Manage pipeline runs. Example: Display help USD tkn pipelinerun -h 5.3.5.2. pipelinerun cancel Cancel a pipeline run. Example: Cancel the mypipelinerun pipeline run from a namespace USD tkn pipelinerun cancel mypipelinerun -n myspace 5.3.5.3. pipelinerun delete Delete a pipeline run. Example: Delete pipeline runs from a namespace USD tkn pipelinerun delete mypipelinerun1 mypipelinerun2 -n myspace Example: Delete all pipeline runs from a namespace, except the five most recently executed pipeline runs USD tkn pipelinerun delete -n myspace --keep 5 1 1 Replace 5 with the number of most recently executed pipeline runs you want to retain. Example: Delete all pipelines USD tkn pipelinerun delete --all Note Starting with Red Hat OpenShift Pipelines 1.6, the tkn pipelinerun delete --all command does not delete any resources that are in the running state. 5.3.5.4. pipelinerun describe Describe a pipeline run. Example: Describe the mypipelinerun pipeline run in a namespace USD tkn pipelinerun describe mypipelinerun -n myspace 5.3.5.5. pipelinerun list List pipeline runs. Example: Display a list of pipeline runs in a namespace USD tkn pipelinerun list -n myspace 5.3.5.6. pipelinerun logs Display the logs of a pipeline run. Example: Display the logs of the mypipelinerun pipeline run with all tasks and steps in a namespace USD tkn pipelinerun logs mypipelinerun -a -n myspace 5.3.6. Task management commands 5.3.6.1. task Manage tasks. Example: Display help USD tkn task -h 5.3.6.2. task delete Delete a task. Example: Delete mytask1 and mytask2 tasks from a namespace USD tkn task delete mytask1 mytask2 -n myspace 5.3.6.3. task describe Describe a task. Example: Describe the mytask task in a namespace USD tkn task describe mytask -n myspace 5.3.6.4. 
task list List tasks. Example: List all the tasks in a namespace USD tkn task list -n myspace 5.3.6.5. task logs Display task logs. Example: Display logs for the mytaskrun task run of the mytask task USD tkn task logs mytask mytaskrun -n myspace 5.3.6.6. task start Start a task. Example: Start the mytask task in a namespace USD tkn task start mytask -s <ServiceAccountName> -n myspace 5.3.7. Task run commands 5.3.7.1. taskrun Manage task runs. Example: Display help USD tkn taskrun -h 5.3.7.2. taskrun cancel Cancel a task run. Example: Cancel the mytaskrun task run from a namespace USD tkn taskrun cancel mytaskrun -n myspace 5.3.7.3. taskrun delete Delete a TaskRun. Example: Delete the mytaskrun1 and mytaskrun2 task runs from a namespace USD tkn taskrun delete mytaskrun1 mytaskrun2 -n myspace Example: Delete all but the five most recently executed task runs from a namespace USD tkn taskrun delete -n myspace --keep 5 1 1 Replace 5 with the number of most recently executed task runs you want to retain. 5.3.7.4. taskrun describe Describe a task run. Example: Describe the mytaskrun task run in a namespace USD tkn taskrun describe mytaskrun -n myspace 5.3.7.5. taskrun list List task runs. Example: List all the task runs in a namespace USD tkn taskrun list -n myspace 5.3.7.6. taskrun logs Display task run logs. Example: Display live logs for the mytaskrun task run in a namespace USD tkn taskrun logs -f mytaskrun -n myspace 5.3.8. Condition management commands 5.3.8.1. condition Manage Conditions. Example: Display help USD tkn condition --help 5.3.8.2. condition delete Delete a Condition. Example: Delete the mycondition1 Condition from a namespace USD tkn condition delete mycondition1 -n myspace 5.3.8.3. condition describe Describe a Condition. Example: Describe the mycondition1 Condition in a namespace USD tkn condition describe mycondition1 -n myspace 5.3.8.4. condition list List Conditions. Example: List Conditions in a namespace USD tkn condition list -n myspace 5.3.9. Pipeline Resource management commands 5.3.9.1. resource Manage Pipeline Resources. Example: Display help USD tkn resource -h 5.3.9.2. resource create Create a Pipeline Resource. Example: Create a Pipeline Resource in a namespace USD tkn resource create -n myspace This is an interactive command that asks for input on the name of the Resource, type of the Resource, and the values based on the type of the Resource. 5.3.9.3. resource delete Delete a Pipeline Resource. Example: Delete the myresource Pipeline Resource from a namespace USD tkn resource delete myresource -n myspace 5.3.9.4. resource describe Describe a Pipeline Resource. Example: Describe the myresource Pipeline Resource USD tkn resource describe myresource -n myspace 5.3.9.5. resource list List Pipeline Resources. Example: List all Pipeline Resources in a namespace USD tkn resource list -n myspace 5.3.10. ClusterTask management commands 5.3.10.1. clustertask Manage ClusterTasks. Example: Display help USD tkn clustertask --help 5.3.10.2. clustertask delete Delete a ClusterTask resource in a cluster. Example: Delete mytask1 and mytask2 ClusterTasks USD tkn clustertask delete mytask1 mytask2 5.3.10.3. clustertask describe Describe a ClusterTask. Example: Describe the mytask ClusterTask USD tkn clustertask describe mytask1 5.3.10.4. clustertask list List ClusterTasks. Example: List ClusterTasks USD tkn clustertask list 5.3.10.5. clustertask start Start ClusterTasks. Example: Start the mytask ClusterTask USD tkn clustertask start mytask 5.3.11. 
Trigger management commands 5.3.11.1. eventlistener Manage EventListeners. Example: Display help USD tkn eventlistener -h 5.3.11.2. eventlistener delete Delete an EventListener. Example: Delete mylistener1 and mylistener2 EventListeners in a namespace USD tkn eventlistener delete mylistener1 mylistener2 -n myspace 5.3.11.3. eventlistener describe Describe an EventListener. Example: Describe the mylistener EventListener in a namespace USD tkn eventlistener describe mylistener -n myspace 5.3.11.4. eventlistener list List EventListeners. Example: List all the EventListeners in a namespace USD tkn eventlistener list -n myspace 5.3.11.5. eventlistener logs Display logs of an EventListener. Example: Display the logs of the mylistener EventListener in a namespace USD tkn eventlistener logs mylistener -n myspace 5.3.11.6. triggerbinding Manage TriggerBindings. Example: Display TriggerBindings help USD tkn triggerbinding -h 5.3.11.7. triggerbinding delete Delete a TriggerBinding. Example: Delete mybinding1 and mybinding2 TriggerBindings in a namespace USD tkn triggerbinding delete mybinding1 mybinding2 -n myspace 5.3.11.8. triggerbinding describe Describe a TriggerBinding. Example: Describe the mybinding TriggerBinding in a namespace USD tkn triggerbinding describe mybinding -n myspace 5.3.11.9. triggerbinding list List TriggerBindings. Example: List all the TriggerBindings in a namespace USD tkn triggerbinding list -n myspace 5.3.11.10. triggertemplate Manage TriggerTemplates. Example: Display TriggerTemplate help USD tkn triggertemplate -h 5.3.11.11. triggertemplate delete Delete a TriggerTemplate. Example: Delete mytemplate1 and mytemplate2 TriggerTemplates in a namespace USD tkn triggertemplate delete mytemplate1 mytemplate2 -n `myspace` 5.3.11.12. triggertemplate describe Describe a TriggerTemplate. Example: Describe the mytemplate TriggerTemplate in a namespace USD tkn triggertemplate describe mytemplate -n `myspace` 5.3.11.13. triggertemplate list List TriggerTemplates. Example: List all the TriggerTemplates in a namespace USD tkn triggertemplate list -n myspace 5.3.11.14. clustertriggerbinding Manage ClusterTriggerBindings. Example: Display ClusterTriggerBindings help USD tkn clustertriggerbinding -h 5.3.11.15. clustertriggerbinding delete Delete a ClusterTriggerBinding. Example: Delete myclusterbinding1 and myclusterbinding2 ClusterTriggerBindings USD tkn clustertriggerbinding delete myclusterbinding1 myclusterbinding2 5.3.11.16. clustertriggerbinding describe Describe a ClusterTriggerBinding. Example: Describe the myclusterbinding ClusterTriggerBinding USD tkn clustertriggerbinding describe myclusterbinding 5.3.11.17. clustertriggerbinding list List ClusterTriggerBindings. Example: List all ClusterTriggerBindings USD tkn clustertriggerbinding list 5.3.12. Hub interaction commands Interact with Tekton Hub for resources such as tasks and pipelines. 5.3.12.1. hub Interact with hub. Example: Display help USD tkn hub -h Example: Interact with a hub API server USD tkn hub --api-server https://api.hub.tekton.dev Note For each example, to get the corresponding sub-commands and flags, run tkn hub <command> --help . 5.3.12.2. hub downgrade Downgrade an installed resource. Example: Downgrade the mytask task in the mynamespace namespace to it's older version USD tkn hub downgrade task mytask --to version -n mynamespace 5.3.12.3. hub get Get a resource manifest by its name, kind, catalog, and version. 
Example: Get the manifest for a specific version of the myresource pipeline or task from the tekton catalog USD tkn hub get [pipeline | task] myresource --from tekton --version version 5.3.12.4. hub info Display information about a resource by its name, kind, catalog, and version. Example: Display information about a specific version of the mytask task from the tekton catalog USD tkn hub info task mytask --from tekton --version version 5.3.12.5. hub install Install a resource from a catalog by its kind, name, and version. Example: Install a specific version of the mytask task from the tekton catalog in the mynamespace namespace USD tkn hub install task mytask --from tekton --version version -n mynamespace 5.3.12.6. hub reinstall Reinstall a resource by its kind and name. Example: Reinstall a specific version of the mytask task from the tekton catalog in the mynamespace namespace USD tkn hub reinstall task mytask --from tekton --version version -n mynamespace 5.3.12.7. hub search Search a resource by a combination of name, kind, and tags. Example: Search a resource with a tag cli USD tkn hub search --tags cli 5.3.12.8. hub upgrade Upgrade an installed resource. Example: Upgrade the installed mytask task in the mynamespace namespace to a new version USD tkn hub upgrade task mytask --to version -n mynamespace | [
"tar xvzf <file>",
"echo USDPATH",
"subscription-manager register",
"subscription-manager refresh",
"subscription-manager list --available --matches '*pipelines*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --enable=\"pipelines-1.6-for-rhel-8-x86_64-rpms\"",
"subscription-manager repos --enable=\"pipelines-1.6-for-rhel-8-s390x-rpms\"",
"subscription-manager repos --enable=\"pipelines-1.6-for-rhel-8-ppc64le-rpms\"",
"yum install openshift-pipelines-client",
"tkn version",
"C:\\> path",
"echo USDPATH",
"tkn completion bash > tkn_bash_completion",
"sudo cp tkn_bash_completion /etc/bash_completion.d/",
"tkn",
"tkn completion bash",
"tkn version",
"tkn pipeline --help",
"tkn pipeline delete mypipeline -n myspace",
"tkn pipeline describe mypipeline",
"tkn pipeline list",
"tkn pipeline logs -f mypipeline",
"tkn pipeline start mypipeline",
"tkn pipelinerun -h",
"tkn pipelinerun cancel mypipelinerun -n myspace",
"tkn pipelinerun delete mypipelinerun1 mypipelinerun2 -n myspace",
"tkn pipelinerun delete -n myspace --keep 5 1",
"tkn pipelinerun delete --all",
"tkn pipelinerun describe mypipelinerun -n myspace",
"tkn pipelinerun list -n myspace",
"tkn pipelinerun logs mypipelinerun -a -n myspace",
"tkn task -h",
"tkn task delete mytask1 mytask2 -n myspace",
"tkn task describe mytask -n myspace",
"tkn task list -n myspace",
"tkn task logs mytask mytaskrun -n myspace",
"tkn task start mytask -s <ServiceAccountName> -n myspace",
"tkn taskrun -h",
"tkn taskrun cancel mytaskrun -n myspace",
"tkn taskrun delete mytaskrun1 mytaskrun2 -n myspace",
"tkn taskrun delete -n myspace --keep 5 1",
"tkn taskrun describe mytaskrun -n myspace",
"tkn taskrun list -n myspace",
"tkn taskrun logs -f mytaskrun -n myspace",
"tkn condition --help",
"tkn condition delete mycondition1 -n myspace",
"tkn condition describe mycondition1 -n myspace",
"tkn condition list -n myspace",
"tkn resource -h",
"tkn resource create -n myspace",
"tkn resource delete myresource -n myspace",
"tkn resource describe myresource -n myspace",
"tkn resource list -n myspace",
"tkn clustertask --help",
"tkn clustertask delete mytask1 mytask2",
"tkn clustertask describe mytask1",
"tkn clustertask list",
"tkn clustertask start mytask",
"tkn eventlistener -h",
"tkn eventlistener delete mylistener1 mylistener2 -n myspace",
"tkn eventlistener describe mylistener -n myspace",
"tkn eventlistener list -n myspace",
"tkn eventlistener logs mylistener -n myspace",
"tkn triggerbinding -h",
"tkn triggerbinding delete mybinding1 mybinding2 -n myspace",
"tkn triggerbinding describe mybinding -n myspace",
"tkn triggerbinding list -n myspace",
"tkn triggertemplate -h",
"tkn triggertemplate delete mytemplate1 mytemplate2 -n `myspace`",
"tkn triggertemplate describe mytemplate -n `myspace`",
"tkn triggertemplate list -n myspace",
"tkn clustertriggerbinding -h",
"tkn clustertriggerbinding delete myclusterbinding1 myclusterbinding2",
"tkn clustertriggerbinding describe myclusterbinding",
"tkn clustertriggerbinding list",
"tkn hub -h",
"tkn hub --api-server https://api.hub.tekton.dev",
"tkn hub downgrade task mytask --to version -n mynamespace",
"tkn hub get [pipeline | task] myresource --from tekton --version version",
"tkn hub info task mytask --from tekton --version version",
"tkn hub install task mytask --from tekton --version version -n mynamespace",
"tkn hub reinstall task mytask --from tekton --version version -n mynamespace",
"tkn hub search --tags cli",
"tkn hub upgrade task mytask --to version -n mynamespace"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/cli_tools/pipelines-cli-tkn |
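As a hedged end-to-end sketch that chains the commands above: start a pipeline, stream the logs of the run it creates, and then inspect that run. The pipeline and namespace names are placeholders, and the --last flag is assumed to be available in your tkn version.
# Sketch: start a pipeline and follow its most recent run.
tkn pipeline start mypipeline -n myspace
tkn pipelinerun logs --last -f -n myspace
tkn pipelinerun describe --last -n myspace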
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/security_hardening/proc_providing-feedback-on-red-hat-documentation_security-hardening |
Chapter 14. Understanding and managing pod security admission | Chapter 14. Understanding and managing pod security admission Pod security admission is an implementation of the Kubernetes pod security standards . Use pod security admission to restrict the behavior of pods. 14.1. About pod security admission OpenShift Dedicated includes Kubernetes pod security admission . Pods that do not comply with the pod security admission defined globally or at the namespace level are not admitted to the cluster and cannot run. Globally, the privileged profile is enforced, and the restricted profile is used for warnings and audits. You can also configure the pod security admission settings at the namespace level. Important Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components. The following default projects are considered highly privileged: default , kube-public , kube-system , openshift , openshift-infra , openshift-node , and other system-created projects that have the openshift.io/run-level label set to 0 or 1 . Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects. 14.1.1. Pod security admission modes You can configure the following pod security admission modes for a namespace: Table 14.1. Pod security admission modes Mode Label Description enforce pod-security.kubernetes.io/enforce Rejects a pod from admission if it does not comply with the set profile audit pod-security.kubernetes.io/audit Logs audit events if a pod does not comply with the set profile warn pod-security.kubernetes.io/warn Displays warnings if a pod does not comply with the set profile 14.1.2. Pod security admission profiles You can set each of the pod security admission modes to one of the following profiles: Table 14.2. Pod security admission profiles Profile Description privileged Least restrictive policy; allows for known privilege escalation baseline Minimally restrictive policy; prevents known privilege escalations restricted Most restrictive policy; follows current pod hardening best practices 14.1.3. Privileged namespaces The following system namespaces are always set to the privileged pod security admission profile: default kube-public kube-system You cannot change the pod security profile for these privileged namespaces. 14.1.4. Pod security admission and security context constraints Pod security admission standards and security context constraints are reconciled and enforced by two independent controllers. The two controllers work independently using the following processes to enforce security policies: The security context constraint controller may mutate some security context fields per the pod's assigned SCC. For example, if the seccomp profile is empty or not set and if the pod's assigned SCC enforces seccompProfiles field to be runtime/default , the controller sets the default type to RuntimeDefault . The security context constraint controller validates the pod's security context against the matching SCC. The pod security admission controller validates the pod's security context against the pod security standard assigned to the namespace. 14.2. 
About pod security admission synchronization In addition to the global pod security admission control configuration, a controller applies pod security admission control warn and audit labels to namespaces according to the SCC permissions of the service accounts that are in a given namespace. The controller examines ServiceAccount object permissions to use security context constraints in each namespace. Security context constraints (SCCs) are mapped to pod security profiles based on their field values; the controller uses these translated profiles. Pod security admission warn and audit labels are set to the most privileged pod security profile in the namespace to prevent displaying warnings and logging audit events when pods are created. Namespace labeling is based on consideration of namespace-local service account privileges. Applying pods directly might use the SCC privileges of the user who runs the pod. However, user privileges are not considered during automatic labeling. 14.2.1. Pod security admission synchronization namespace exclusions Pod security admission synchronization is permanently disabled on system-created namespaces and openshift-* prefixed namespaces. Namespaces that are defined as part of the cluster payload have pod security admission synchronization disabled permanently. The following namespaces are permanently disabled: default kube-node-lease kube-system kube-public openshift All system-created namespaces that are prefixed with openshift- 14.3. Controlling pod security admission synchronization You can enable or disable automatic pod security admission synchronization for most namespaces. Important You cannot enable pod security admission synchronization on system-created namespaces. For more information, see Pod security admission synchronization namespace exclusions . Procedure For each namespace that you want to configure, set a value for the security.openshift.io/scc.podSecurityLabelSync label: To disable pod security admission label synchronization in a namespace, set the value of the security.openshift.io/scc.podSecurityLabelSync label to false . Run the following command: USD oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=false To enable pod security admission label synchronization in a namespace, set the value of the security.openshift.io/scc.podSecurityLabelSync label to true . Run the following command: USD oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=true Note Use the --overwrite flag to overwrite the value if this label is already set on the namespace. Additional resources Pod security admission synchronization namespace exclusions 14.4. Configuring pod security admission for a namespace You can configure the pod security admission settings at the namespace level. For each of the pod security admission modes on the namespace, you can set which pod security admission profile to use. Procedure For each pod security admission mode that you want to set on a namespace, run the following command: USD oc label namespace <namespace> \ 1 pod-security.kubernetes.io/<mode>=<profile> \ 2 --overwrite 1 Set <namespace> to the namespace to configure. 2 Set <mode> to enforce , warn , or audit . Set <profile> to restricted , baseline , or privileged . 14.5. About pod security admission alerts A PodSecurityViolation alert is triggered when the Kubernetes API server reports that there is a pod denial on the audit level of the pod security admission controller. This alert persists for one day. 
View the Kubernetes API server audit logs to investigate alerts that were triggered. As an example, a workload is likely to fail admission if global enforcement is set to the restricted pod security level. For assistance in identifying pod security admission violation audit events, see Audit annotations in the Kubernetes documentation. 14.6. Additional resources Viewing audit logs Managing security context constraints | [
"oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=false",
"oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=true",
"oc label namespace <namespace> \\ 1 pod-security.kubernetes.io/<mode>=<profile> \\ 2 --overwrite"
] | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/authentication_and_authorization/understanding-and-managing-pod-security-admission |
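The label-based procedures in the chapter above can also be expressed declaratively in the namespace manifest. The following manifest is a minimal sketch rather than text from the product documentation: it shows a namespace that opts out of automatic label synchronization and pins its own admission profiles. The namespace name my-app and the chosen profiles are assumptions for illustration only.

apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  labels:
    # Disable automatic pod security admission label synchronization for this namespace.
    security.openshift.io/scc.podSecurityLabelSync: "false"
    # Reject pods that do not comply with the restricted profile.
    pod-security.kubernetes.io/enforce: restricted
    # Warn on and log audit events for pods that do not comply with the baseline profile.
    pod-security.kubernetes.io/warn: baseline
    pod-security.kubernetes.io/audit: baseline

Applying this manifest with oc apply -f <file> has the same effect as running the oc label commands shown in the procedures, with the advantage that the configuration can be kept under version control.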
Chapter 5. Downloading reports | Chapter 5. Downloading reports After you run a scan, you can download the reports for that scan to view the data that was gathered and processed during that scan. Learn more To learn more about downloading reports, see the following information: Downloading reports 5.1. Downloading reports After you run a scan, you can download the reports for that scan to view the data that was gathered and processed during that scan. Reports for a scan are available in two formats, a comma-separated variable (CSV) format and a JavaScript Object Notation (JSON) format. They are also available in two content types, raw output from the scan as a details report and processed content as a deployments report. Note A third type of report is available, the insights report, but this report can be generated only through the Discovery command line interface. Downloading the insights report provides a .tar.gz file that you can transfer to the Hybrid Cloud Console at cloud.redhat.com. Transferring this file allows the report data to be used in the Red Hat Insights inventory service and in the subscriptions service. Learn more To learn more about merging and downloading reports, see the following information: Downloading reports To learn more about how reports are created, see the following information. This information includes a chronology of the processes of report generation. These processes change the raw facts of a details report into fingerprint data, and then change fingerprint data into the deduplicated and merged data of a deployments report. This information also includes a partial fingerprint example to show the types of data that are used to create a Discovery report. How reports are created A fingerprint example 5.1.1. Downloading reports From the Scans view, you can select one or more reports and download them to view the report data. Prerequisites If you want to download a report for a scan, the most recent scan job for that scan must have completed successfully. Procedure From the Scans view, navigate to the row of the scan for which you want to download the report. Click Download for that scan. Verification steps The downloaded report is saved to the downloads location for your browser as a .tar.gz file, for example, report_id_224_20190702_173309.tar.gz . The filename format is report_id_ ID _ DATE _ TIME .tar.gz , where ID is the unique report ID assigned by the server, DATE is the date in yyyymmdd format, and TIME is the time in the hhmmss format, based on the 24-hour system. The date and time data is determined by the interaction of the browser that is running the client with the server APIs. To view the report, uncompress the .tar.gz file into a report_id_ ID directory. The uncompressed report bundle includes four report files: two details reports in CSV and JSON formats, and two deployments reports in CSV and JSON formats. Note While you can view and use the output of these reports for your own internal processes, the Discovery documentation does not provide any information to help you interpret report results. In addition, although Red Hat Support can provide some basic assistance related to the installation and use of Discovery, the support team does not provide any assistance to help you understand the reports. 
The reports and their format are designed to be used by the Red Hat Subscription Education and Awareness Program (SEAP) team during customer engagements and for other Red Hat internal processes, such as providing data to various Hybrid Cloud Console services. 5.1.2. How reports are created The scan process is used to discover the systems in your IT infrastructure, to inspect and gather information about the nature and contents of those systems, and to create a report from the information that it gathers during the inspection of each system. A system is any entity that can be interrogated by the inspection tasks through an SSH connection, vCenter Server data, the Satellite Server API, or the Red Hat OpenShift cluster API. Therefore, a system can be a machine, such as a physical or virtual machine, and it can also be a different type of entity, such as a container or a cluster. 5.1.2.1. Facts and fingerprints During a scan, a collection of facts is gathered for each system that is contained in each source. A fact is a single piece of data about a system, such as the version of the operating system, the number of CPU cores, or a consumed entitlement for a Red Hat product. Facts are processed to create a summarized set of data for each system, data that is known as a fingerprint. A fingerprint is the set of facts that identifies a unique system and its characteristics, including the architecture, operating system, the different products that are installed on that system and their versions, the entitlements that are in use on that system, and so on. Fingerprinting data is generated when you run a scan job, but the data is used to create only one type of report. When you request a details report, you receive the raw facts for that scan without any fingerprinting. When you request a deployments report, you receive the fingerprinting data that includes the results from the deduplication, merging, and post-processing processes. These processes include identifying installed products and versions from the raw facts, finding consumed entitlements, finding and merging duplicate instances of products from different sources, and finding products installed in nondefault locations, among other steps. 5.1.2.2. System deduplication and system merging A single system can be found in multiple sources during a scan. For example, a virtual machine on vCenter Server could be running a Red Hat Enterprise Linux operating system installation that is also managed by Satellite. If you construct a scan that contains each type of source, vcenter, satellite, and network, that single system is reported by all three sources during the scan. Note Currently, you cannot combine an OpenShift or Ansible source with any other type of source in a scan, so deduplication and merging processes do not apply for an OpenShift or Ansible scan. To resolve this issue and build an accurate fingerprint, Discovery feeds unprocessed system facts from the scan into a fingerprint engine. The fingerprint engine matches and merges data for systems that are found in more than one source by using the deduplication and merge processes. The system deduplication process uses specific facts about a system to identify duplicate systems. The process moves through several phases, using these facts to combine duplicate systems in successively broader sets of data: All systems from network sources are combined into a single network system set. 
Systems are considered to be duplicates if they have the same value for the subscription_manager_id or bios_uuid facts. All systems from vcenter sources are combined into a single vcenter system set. Systems are considered to be duplicates if they have the same value for the vm_uuid fact. All systems from satellite sources are combined into a single satellite system set. Systems are considered to be duplicates if they have the same value for the subscription_manager_id fact. The network system set is merged with the satellite system set to form a single network-satellite system set. Systems are considered to be duplicates if they have the same value for the subscription_manager fact or matching MAC address values in the mac_addresses fact. The network-satellite system set is merged with the vcenter system set to form the complete system set. Systems are considered to be duplicates if they have matching MAC address values in the mac_addresses fact or if the vcenter value for the vm_uuid fact matches the network value for the bios_uuid fact. 5.1.2.2.1. System merging After the deduplication process determines that two systems are duplicates, the next step is to perform a merge of those two systems. The merged system has a union of system facts from each source. When a fact that appears in two systems is merged, the merge process uses the following order of precedence to merge that fact, from highest to lowest: network source fact satellite source fact vcenter source fact A system fingerprint contains a metadata dictionary that captures the original source of each fact for that system. 5.1.2.3. System post-processing After deduplication and merging are complete, there is a post-processing phase that creates derived system facts. A derived system fact is a fact that is generated from the evaluation of more than one system fact. The majority of derived system facts are related to product identification data, such as the presence of a specific product and its version. The following example shows how the derived system fact system_creation_date is created. The system_creation_date fact is a derived system fact that contains the real system creation time. The value for this fact is determined by the evaluation of the following facts. The value for each fact is examined in the following order of precedence, with the order of precedence determined by the accuracy of the match to the real system creation time. The highest non-empty value is used to determine the value of the system_creation_date fact. date_machine_id registration_time date_anaconda_log date_filesystem_create date_yum_history 5.1.2.4. Report creation After the processing of the report data is complete, the report creation process builds two reports in two different formats, JavaScript Object Notation (JSON) and comma-separated variable (CSV). The details report for each format contains the raw facts with no processing, and the deployments report for each format contains the output after the raw facts have passed through the fingerprinting, deduplication, merge, and post-processing processes. The report format is designed to be used by the Red Hat Subscription Education and Awareness Program (SEAP) team during customer engagements and for other Red Hat internal processes. Note While you can view and use the output of these reports for your own internal processes, the Discovery documentation does not provide any information to help you interpret report results.
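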
In addition, although Red Hat Support can provide some basic assistance related to the installation and use of Discovery, the support team does not provide any assistance to help you understand the reports. The reports and their format are designed to be used by the Red Hat Subscription Education and Awareness Program (SEAP) team during customer engagements and for other Red Hat internal processes, such as providing data to various Hybrid Cloud Console services. 5.1.2.5. A fingerprint example A fingerprint is composed of a set of facts about a single system in addition to facts about products, entitlements, sources, and metadata on that system. The following example shows fingerprint data. A fingerprint for a single system, even with very few Red Hat products installed on it, can be many lines. Therefore, only a partial fingerprint is used in this example. Example The first several lines of a fingerprint show facts about the system, including facts about the operating system and CPUs. In this example, the os_release fact describes the installed operating system and release as Red Hat Enterprise Linux Atomic Host 7.4. Next, the fingerprint lists the installed products in the products section. A product has a name, version, presence, and metadata field. In the JBoss EAP section, the presence field shows absent as the value, so the system in this example does not have Red Hat JBoss Enterprise Application Platform installed. The fingerprint also lists the consumed entitlements for that system in the entitlements section. Each entitlement in the list has a name, ID, and metadata that describes the original source of that fact. In the example fingerprint, the system has the Satellite Tools 6.3 entitlement. In addition to the metadata fields that are in the products and entitlements sections, the fingerprint contains a metadata section that is used for system fact metadata. For each system fact, there is a corresponding entry in the metadata section of the fingerprint that identifies the original source of that system fact. In the example, the os_release fact was found in Satellite Server, during the scan of the satellite source. Lastly, the fingerprint lists the sources that contain this system in the sources section. A system can be contained in more than one source. For example, for a scan that includes both a network source and a satellite source, a single system can be found in both parts of the scan. | [
"{ \"os_release\": \"Red Hat Enterprise Linux Atomic Host 7.4\", \"cpu_count\": 4, \"products\": [ { \"name\": \"JBoss EAP\", \"version\": null, \"presence\": \"absent\", \"metadata\": { \"source_id\": 5, \"source_name\": \"S62Source\", \"source_type\": \"satellite\", \"raw_fact_key\": null } } ], \"entitlements\": [ { \"name\": \"Satellite Tools 6.3\", \"entitlement_id\": 54, \"metadata\": { \"source_id\": 5, \"source_name\": \"S62Source\", \"source_type\": \"satellite\", \"raw_fact_key\": \"entitlements\" } } ], \"metadata\": { \"os_release\": { \"source_id\": 5, \"source_name\": \"S62Source\", \"source_type\": \"satellite\", \"raw_fact_key\": \"os_release\" }, \"cpu_count\": { \"source_id\": 4, \"source_name\": \"NetworkSource\", \"source_type\": \"network\", \"raw_fact_key\": \"os_release\" } }, \"sources\": [ { \"id\": 4, \"source_type\": \"network\", \"name\": \"NetworkSource\" }, { \"id\": 5, \"source_type\": \"satellite\", \"name\": \"S62Source\" } ] }"
] | https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/using_red_hat_discovery/assembly-merging-downloading-reports-gui-main |
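As a quick follow-up to the download procedure above, the following shell commands are one way to unpack and skim a downloaded report bundle. This is a sketch only: it reuses the report ID, date, and time from the example filename in the text, and it assumes the jq utility is installed; the exact file names inside the bundle may differ on your system.

# Unpack the bundle into the report_id_<ID> directory described above.
tar -xzf report_id_224_20190702_173309.tar.gz
# List the four report files: details and deployments reports, each in CSV and JSON format.
ls report_id_224/
# Pretty-print the JSON reports to inspect raw facts and fingerprint data such as os_release or sources.
jq '.' report_id_224/*.json | less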
Chapter 70. Timer Source | Chapter 70. Timer Source Produces periodic events with a custom payload. 70.1. Configuration Options The following table summarizes the configuration options available for the timer-source Kamelet: Property Name Description Type Default Example message * Message The message to generate string "hello world" contentType Content Type The content type of the message being generated string "text/plain" period Period The interval between two events in milliseconds integer 1000 repeatCount Repeat Count Specifies the maximum limit of the number of fires integer Note Fields marked with an asterisk (*) are mandatory. 70.2. Dependencies At runtime, the timer-source Kamelet relies upon the presence of the following dependencies: camel:core camel:timer camel:kamelet 70.3. Usage This section describes how you can use the timer-source . 70.3.1. Knative Source You can use the timer-source Kamelet as a Knative source by binding it to a Knative object. timer-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: timer-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "hello world" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 70.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 70.3.1.2. Procedure for using the cluster CLI Save the timer-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f timer-source-binding.yaml 70.3.1.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind timer-source -p "source.message=hello world" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 70.3.2. Kafka Source You can use the timer-source Kamelet as a Kafka source by binding it to a Kafka topic. timer-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: timer-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "hello world" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 70.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 70.3.2.2. Procedure for using the cluster CLI Save the timer-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f timer-source-binding.yaml 70.3.2.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind timer-source -p "source.message=hello world" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 70.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/timer-source.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: timer-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"hello world\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel",
"apply -f timer-source-binding.yaml",
"kamel bind timer-source -p \"source.message=hello world\" channel:mychannel",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: timer-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"hello world\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic",
"apply -f timer-source-binding.yaml",
"kamel bind timer-source -p \"source.message=hello world\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/timer-source |
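The bindings above set only the mandatory message property. The following variant is a sketch that also exercises the optional properties from the configuration table (contentType, period, and repeatCount); the property values and the my-topic Kafka topic are illustrative assumptions rather than recommended settings.

apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: timer-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      # Payload and its declared content type.
      message: '{"greeting": "hello world"}'
      contentType: application/json
      # Fire every 5000 ms instead of the 1000 ms default, and stop after ten events.
      period: 5000
      repeatCount: 10
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic

As with the earlier examples, the binding can be created with oc apply -f <file>, or the same properties can be passed to kamel bind as -p "source.<property>=<value>" flags.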
Chapter 13. SelfSubjectRulesReview [authorization.k8s.io/v1] | Chapter 13. SelfSubjectRulesReview [authorization.k8s.io/v1] Description SelfSubjectRulesReview enumerates the set of actions the current user can perform within a namespace. The returned list of actions may be incomplete depending on the server's authorization mode, and any errors experienced during the evaluation. SelfSubjectRulesReview should be used by UIs to show/hide actions, or to quickly let an end user reason about their permissions. It should NOT Be used by external systems to drive authorization decisions as this raises confused deputy, cache lifetime/revocation, and correctness concerns. SubjectAccessReview, and LocalAccessReview are the correct way to defer authorization decisions to the API server. Type object Required spec 13.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object SelfSubjectRulesReviewSpec defines the specification for SelfSubjectRulesReview. status object SubjectRulesReviewStatus contains the result of a rules check. This check can be incomplete depending on the set of authorizers the server is configured with and any errors experienced during evaluation. Because authorization rules are additive, if a rule appears in a list it's safe to assume the subject has that permission, even if that list is incomplete. 13.1.1. .spec Description SelfSubjectRulesReviewSpec defines the specification for SelfSubjectRulesReview. Type object Property Type Description namespace string Namespace to evaluate rules for. Required. 13.1.2. .status Description SubjectRulesReviewStatus contains the result of a rules check. This check can be incomplete depending on the set of authorizers the server is configured with and any errors experienced during evaluation. Because authorization rules are additive, if a rule appears in a list it's safe to assume the subject has that permission, even if that list is incomplete. Type object Required resourceRules nonResourceRules incomplete Property Type Description evaluationError string EvaluationError can appear in combination with Rules. It indicates an error occurred during rule evaluation, such as an authorizer that doesn't support rule evaluation, and that ResourceRules and/or NonResourceRules may be incomplete. incomplete boolean Incomplete is true when the rules returned by this call are incomplete. This is most commonly encountered when an authorizer, such as an external authorizer, doesn't support rules evaluation. nonResourceRules array NonResourceRules is the list of actions the subject is allowed to perform on non-resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. 
nonResourceRules[] object NonResourceRule holds information that describes a rule for the non-resource resourceRules array ResourceRules is the list of actions the subject is allowed to perform on resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. resourceRules[] object ResourceRule is the list of actions the subject is allowed to perform on resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. 13.1.3. .status.nonResourceRules Description NonResourceRules is the list of actions the subject is allowed to perform on non-resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. Type array 13.1.4. .status.nonResourceRules[] Description NonResourceRule holds information that describes a rule for the non-resource Type object Required verbs Property Type Description nonResourceURLs array (string) NonResourceURLs is a set of partial urls that a user should have access to. "*"s are allowed, but only as the full, final step in the path. "*" means all. verbs array (string) Verb is a list of kubernetes non-resource API verbs, like: get, post, put, delete, patch, head, options. "*" means all. 13.1.5. .status.resourceRules Description ResourceRules is the list of actions the subject is allowed to perform on resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. Type array 13.1.6. .status.resourceRules[] Description ResourceRule is the list of actions the subject is allowed to perform on resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. Type object Required verbs Property Type Description apiGroups array (string) APIGroups is the name of the APIGroup that contains the resources. If multiple API groups are specified, any action requested against one of the enumerated resources in any API group will be allowed. "*" means all. resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. "*" means all. resources array (string) Resources is a list of resources this rule applies to. "*" means all in the specified apiGroups. "*/foo" represents the subresource 'foo' for all resources in the specified apiGroups. verbs array (string) Verb is a list of kubernetes resource API verbs, like: get, list, watch, create, update, delete, proxy. "*" means all. 13.2. API endpoints The following API endpoints are available: /apis/authorization.k8s.io/v1/selfsubjectrulesreviews POST : create a SelfSubjectRulesReview 13.2.1. /apis/authorization.k8s.io/v1/selfsubjectrulesreviews Table 13.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23.
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create a SelfSubjectRulesReview Table 13.2. Body parameters Parameter Type Description body SelfSubjectRulesReview schema Table 13.3. HTTP responses HTTP code Reponse body 200 - OK SelfSubjectRulesReview schema 201 - Created SelfSubjectRulesReview schema 202 - Accepted SelfSubjectRulesReview schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/authorization_apis/selfsubjectrulesreview-authorization-k8s-io-v1 |
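Because SelfSubjectRulesReview is a create-only review resource (the endpoint listing above supports only POST), the usual way to exercise it from the command line is to post a small manifest and read back the returned status. The following is a hedged sketch; the namespace my-project is an assumption for illustration.

# Submit a SelfSubjectRulesReview for one namespace and print the server's response.
oc create -o yaml -f - <<'EOF'
apiVersion: authorization.k8s.io/v1
kind: SelfSubjectRulesReview
spec:
  namespace: my-project
EOF

The command prints the object with its status filled in, including the resourceRules and nonResourceRules lists described in sections 13.1.5 and 13.1.3, which a UI could use to decide which actions to show or hide for the current user.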
Chapter 1. Release notes | Chapter 1. Release notes 1.1. Logging 5.8 Note Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y , where x.y represents the major and minor version of logging you have installed. For example, stable-5.7 . 1.1.1. Logging 5.8.18 This release includes RHSA-2025:1983 and RHBA-2025:1984 . 1.1.1.1. CVEs CVE-2019-12900 CVE-2020-11023 CVE-2022-49043 CVE-2024-12797 CVE-2024-53104 CVE-2025-1244 Note For detailed information on Red Hat security ratings, review Severity ratings . 1.1.2. Logging 5.8.17 This release includes OpenShift Logging Bug Fix Release 5.8.17 and OpenShift Logging Bug Fix Release 5.8.17 . 1.1.2.1. Enhancements This enhancement adds OTel semantic stream labels to the lokiStack output so that you can query logs by using both ViaQ and OTel stream labels. ( LOG-6582 ) 1.1.2.2. CVEs CVE-2019-12900 CVE-2024-9287 CVE-2024-11168 CVE-2024-12085 CVE-2024-46713 CVE-2024-50208 CVE-2024-50252 CVE-2024-53122 Note For detailed information on Red Hat security ratings, review Severity ratings . 1.1.3. Logging 5.8.16 This release includes RHBA-2024:10989 and RHBA-2024:143685 . 1.1.3.1. Bug fixes Before this update, Loki automatically tried to guess the log level of log messages, which caused confusion because the collector already does this, and Loki and the collector would sometimes come to different results. With this update, the automatic log level discovery in Loki is disabled. LOG-6322 . 1.1.3.2. CVEs CVE-2019-12900 CVE-2021-3903 CVE-2023-38709 CVE-2024-2236 CVE-2024-2511 CVE-2024-3596 CVE-2024-4603 CVE-2024-4741 CVE-2024-5535 CVE-2024-6232 CVE-2024-9287 CVE-2024-10041 CVE-2024-10963 CVE-2024-11168 CVE-2024-24795 CVE-2024-36387 CVE-2024-41009 CVE-2024-42244 CVE-2024-47175 CVE-2024-47875 CVE-2024-50226 CVE-2024-50602 1.1.4. Logging 5.8.15 This release includes RHBA-2024:10052 and RHBA-2024:10053 . 1.1.4.1. Bug fixes Before this update, Loki did not correctly load some configurations, which caused issues when using Alibaba Cloud or IBM Cloud object storage. This update fixes the configuration-loading code in Loki, resolving the issue. ( LOG-6294 ) Before this update, upgrades to version 6.0 failed with errors if a Log File Metric Exporter instance was present. This update fixes the issue, enabling upgrades to proceed smoothly without errors. ( LOG-6328 ) 1.1.4.2. CVEs CVE-2021-47385 CVE-2023-28746 CVE-2023-48161 CVE-2023-52658 CVE-2024-6119 CVE-2024-6232 CVE-2024-21208 CVE-2024-21210 CVE-2024-21217 CVE-2024-21235 CVE-2024-27403 CVE-2024-35989 CVE-2024-36889 CVE-2024-36978 CVE-2024-38556 CVE-2024-39483 CVE-2024-39502 CVE-2024-40959 CVE-2024-42079 CVE-2024-42272 CVE-2024-42284 CVE-2024-3596 CVE-2024-5535 1.1.5. Logging 5.8.14 This release includes OpenShift Logging Bug Fix Release 5.8.14 and OpenShift Logging Bug Fix Release 5.8.14 . 1.1.5.1. Bug fixes Before this update, it was possible to set the .containerLimit.maxRecordsPerSecond parameter in the ClusterLogForwarder custom resource to 0 , which could lead to an exception during Vector's startup. With this update, the configuration is validated before being applied, and any invalid values (less than or equal to zero) are rejected. 
( LOG-4671 ) Before this update, the Loki Operator did not automatically add the default namespace label to all its alerting rules, which caused Alertmanager instance for user-defined projects to skip routing such alerts. With this update, all alerting and recording rules have the namespace label and Alertmanager now routes these alerts correctly. ( LOG-6182 ) Before this update, the LokiStack ruler component view was not properly initialized, which caused the invalid field error when the ruler component was disabled. With this update, the issue is resolved by the component view being initialized with an empty value. ( LOG-6184 ) 1.1.5.2. CVEs CVE-2023-37920 CVE-2024-2398 CVE-2024-4032 CVE-2024-6232 CVE-2024-6345 CVE-2024-6923 CVE-2024-30203 CVE-2024-30205 CVE-2024-39331 CVE-2024-45490 CVE-2024-45491 CVE-2024-45492 CVE-2024-6119 CVE-2024-24791 CVE-2024-34155 CVE-2024-34156 CVE-2024-34158 CVE-2024-34397 Note For detailed information on Red Hat security ratings, review Severity ratings . 1.1.6. Logging 5.8.13 This release includes OpenShift Logging Bug Fix Release 5.8.13 and OpenShift Logging Bug Fix Release 5.8.13 . 1.1.6.1. Bug fixes Before this update, the clusterlogforwarder.spec.outputs.http.timeout parameter was not applied to the Fluentd configuration when Fluentd was used as the collector type, causing HTTP timeouts to be misconfigured. With this update, the clusterlogforwarder.spec.outputs.http.timeout parameter is now correctly applied, ensuring that Fluentd honors the specified timeout and handles HTTP connections according to the user's configuration. ( LOG-5210 ) Before this update, the Elasticsearch Operator did not issue an alert to inform users about the upcoming removal, leaving existing installations unsupported without notice. With this update, the Elasticsearch Operator will trigger a continuous alert on OpenShift Container Platform version 4.16 and later, notifying users of its removal from the catalog in November 2025. ( LOG-5966 ) Before this update, the Red Hat OpenShift Logging Operator was unavailable on OpenShift Container Platform version 4.16 and later, preventing Telco customers from completing their certifications for the upcoming Logging 6.0 release. With this update, the Red Hat OpenShift Logging Operator is now available on OpenShift Container Platform versions 4.16 and 4.17, resolving the issue. ( LOG-6103 ) Before this update, the Elasticsearch Operator was not available in the OpenShift Container Platform versions 4.17 and 4.18, preventing the installation of ServiceMesh, Kiali, and Distributed Tracing. With this update, the Elasticsearch Operator properties have been expanded for OpenShift Container Platform versions 4.17 and 4.18, resolving the issue and allowing ServiceMesh, Kiali, and Distributed Tracing operators to install their stacks. ( LOG-6134 ) 1.1.6.2. 
CVEs CVE-2023-52463 CVE-2023-52801 CVE-2024-6104 CVE-2024-6119 CVE-2024-26629 CVE-2024-26630 CVE-2024-26720 CVE-2024-26886 CVE-2024-26946 CVE-2024-34397 CVE-2024-35791 CVE-2024-35797 CVE-2024-35875 CVE-2024-36000 CVE-2024-36019 CVE-2024-36883 CVE-2024-36979 CVE-2024-38559 CVE-2024-38619 CVE-2024-39331 CVE-2024-40927 CVE-2024-40936 CVE-2024-41040 CVE-2024-41044 CVE-2024-41055 CVE-2024-41073 CVE-2024-41096 CVE-2024-42082 CVE-2024-42096 CVE-2024-42102 CVE-2024-42131 CVE-2024-45490 CVE-2024-45491 CVE-2024-45492 CVE-2024-2398 CVE-2024-4032 CVE-2024-6232 CVE-2024-6345 CVE-2024-6923 CVE-2024-30203 CVE-2024-30205 CVE-2024-39331 CVE-2024-45490 CVE-2024-45491 CVE-2024-45492 Note For detailed information on Red Hat security ratings, review Severity ratings . 1.1.7. Logging 5.8.12 This release includes OpenShift Logging Bug Fix Release 5.8.12 and OpenShift Logging Bug Fix Release 5.8.12 . 1.1.7.1. Bug fixes Before this update, the collector used internal buffering with the drop_newest setting to reduce high memory usage, which caused significant log loss. With this update, the collector goes back to its default behavior, where sink<>.buffer is not customized. ( LOG-6026 ) 1.1.7.2. CVEs CVE-2023-52771 CVE-2023-52880 CVE-2024-2398 CVE-2024-6345 CVE-2024-6923 CVE-2024-26581 CVE-2024-26668 CVE-2024-26810 CVE-2024-26855 CVE-2024-26908 CVE-2024-26925 CVE-2024-27016 CVE-2024-27019 CVE-2024-27020 CVE-2024-27415 CVE-2024-35839 CVE-2024-35896 CVE-2024-35897 CVE-2024-35898 CVE-2024-35962 CVE-2024-36003 CVE-2024-36025 CVE-2024-37370 CVE-2024-37371 CVE-2024-37891 CVE-2024-38428 CVE-2024-38476 CVE-2024-38538 CVE-2024-38540 CVE-2024-38544 CVE-2024-38579 CVE-2024-38608 CVE-2024-39476 CVE-2024-40905 CVE-2024-40911 CVE-2024-40912 CVE-2024-40914 CVE-2024-40929 CVE-2024-40939 CVE-2024-40941 CVE-2024-40957 CVE-2024-40978 CVE-2024-40983 CVE-2024-41041 CVE-2024-41076 CVE-2024-41090 CVE-2024-41091 CVE-2024-42110 CVE-2024-42152 1.1.8. Logging 5.8.11 This release includes OpenShift Logging Bug Fix Release 5.8.11 and OpenShift Logging Bug Fix Release 5.8.11 . 1.1.8.1. Bug fixes Before this update, the TLS section was added without verifying the broker URL schema, leading to SSL connection errors if the URLs did not start with tls . With this update, the TLS section is added only if broker URLs start with tls , preventing SSL connection errors. ( LOG-5139 ) Before this update, the Loki Operator did not trigger alerts when it dropped log events due to validation failures. With this update, the Loki Operator includes a new alert definition that triggers an alert if Loki drops log events due to validation failures. ( LOG-5896 ) Before this update, the 4.16 GA catalog did not include Elasticsearch Operator 5.8, preventing the installation of products like Service Mesh, Kiali, and Tracing. With this update, Elasticsearch Operator 5.8 is now available on 4.16, resolving the issue and providing support for Elasticsearch storage for these products only. ( LOG-5911 ) Before this update, duplicate conditions in the LokiStack resource status led to invalid metrics from the Loki Operator. With this update, the Operator removes duplicate conditions from the status. ( LOG-5857 ) Before this update, the Loki Operator overwrote user annotations on the LokiStack Route resource, causing customizations to drop. With this update, the Loki Operator no longer overwrites Route annotations, fixing the issue. ( LOG-5946 ) 1.1.8.2. 
CVEs CVE-2021-47548 CVE-2021-47596 CVE-2022-48627 CVE-2023-52638 CVE-2024-4032 CVE-2024-6409 CVE-2024-21131 CVE-2024-21138 CVE-2024-21140 CVE-2024-21144 CVE-2024-21145 CVE-2024-21147 CVE-2024-24806 CVE-2024-26783 CVE-2024-26858 CVE-2024-27397 CVE-2024-27435 CVE-2024-35235 CVE-2024-35958 CVE-2024-36270 CVE-2024-36886 CVE-2024-36904 CVE-2024-36957 CVE-2024-38473 CVE-2024-38474 CVE-2024-38475 CVE-2024-38477 CVE-2024-38543 CVE-2024-38586 CVE-2024-38593 CVE-2024-38663 CVE-2024-39573 1.1.9. Logging 5.8.10 This release includes OpenShift Logging Bug Fix Release 5.8.10 and OpenShift Logging Bug Fix Release 5.8.10 . 1.1.9.1. Known issues Before this update, when enabling retention, the Loki Operator produced an invalid configuration. As a result, Loki did not start properly. With this update, Loki pods can set retention. ( LOG-5821 ) 1.1.9.2. Bug fixes Before this update, the ClusterLogForwarder introduced an extra space in the message payload that did not follow the RFC3164 specification. With this update, the extra space has been removed, fixing the issue. ( LOG-5647 ) 1.1.9.3. CVEs CVE-2023-6597 CVE-2024-0450 CVE-2024-3651 CVE-2024-6387 CVE-2024-26735 CVE-2024-26993 CVE-2024-32002 CVE-2024-32004 CVE-2024-32020 CVE-2024-32021 CVE-2024-32465 1.1.10. Logging 5.8.9 This release includes OpenShift Logging Bug Fix Release 5.8.9 and OpenShift Logging Bug Fix Release 5.8.9 . 1.1.10.1. Bug fixes Before this update, an issue prevented selecting pods that no longer existed, even if they had generated logs. With this update, this issue has been fixed, allowing selection of such pods. ( LOG-5698 ) Before this update, LokiStack was missing a route for the Volume API, which caused the following error: 404 not found . With this update, LokiStack exposes the Volume API, resolving the issue. ( LOG-5750 ) Before this update, the Elasticsearch operator overwrote all service account annotations without considering ownership. As a result, the kube-controller-manager recreated service account secrets because it logged the link to the owning service account. With this update, the Elasticsearch operator merges annotations, resolving the issue. ( LOG-5776 ) 1.1.10.2. CVEs CVE-2023-6597 CVE-2024-0450 CVE-2024-3651 CVE-2024-6387 CVE-2024-24790 CVE-2024-26735 CVE-2024-26993 CVE-2024-32002 CVE-2024-32004 CVE-2024-32020 CVE-2024-32021 CVE-2024-32465 1.1.11. Logging 5.8.8 This release includes OpenShift Logging Bug Fix Release 5.8.8 and OpenShift Logging Bug Fix Release 5.8.8 . 1.1.11.1. Bug fixes Before this update, there was a delay in restarting Ingesters when configuring LokiStack , because the Loki Operator sets the write-ahead log replay_memory_ceiling to zero bytes for the 1x.demo size. With this update, the minimum value used for the replay_memory_ceiling has been increased to avoid delays. ( LOG-5615 ) 1.1.11.2. CVEs CVE-2020-15778 CVE-2021-43618 CVE-2023-6004 CVE-2023-6597 CVE-2023-6918 CVE-2023-7008 CVE-2024-0450 CVE-2024-2961 CVE-2024-22365 CVE-2024-25062 CVE-2024-26458 CVE-2024-26461 CVE-2024-26642 CVE-2024-26643 CVE-2024-26673 CVE-2024-26804 CVE-2024-28182 CVE-2024-32487 CVE-2024-33599 CVE-2024-33600 CVE-2024-33601 CVE-2024-33602 1.1.12. Logging 5.8.7 This release includes OpenShift Logging Bug Fix Release 5.8.7 Security Update and OpenShift Logging Bug Fix Release 5.8.7 . 1.1.12.1. Bug fixes Before this update, the elasticsearch-im-<type>-* pods failed if no <type> logs (audit, infrastructure, or application) were collected. With this update, the pods no longer fail when <type> logs are not collected. 
( LOG-4949 ) Before this update, the validation feature for output config required an SSL/TLS URL, even for services such as Amazon CloudWatch or Google Cloud Logging where a URL is not needed by design. With this update, the validation logic for services without URLs are improved, and the error message is more informative. ( LOG-5467 ) Before this update, an issue in the metrics collection code of the Logging Operator caused it to report stale telemetry metrics. With this update, the Logging Operator does not report stale telemetry metrics. ( LOG-5471 ) Before this update, changes to the Logging Operator caused an error due to an incorrect configuration in the ClusterLogForwarder CR. As a result, upgrades to logging deleted the daemonset collector. With this update, the Logging Operator re-creates collector daemonsets except when a Not authorized to collect error occurs. ( LOG-5514 ) 1.1.12.2. CVEs CVE-2020-26555 CVE-2021-29390 CVE-2022-0480 CVE-2022-38096 CVE-2022-40090 CVE-2022-45934 CVE-2022-48554 CVE-2022-48624 CVE-2023-2975 CVE-2023-3446 CVE-2023-3567 CVE-2023-3618 CVE-2023-3817 CVE-2023-4133 CVE-2023-5678 CVE-2023-6040 CVE-2023-6121 CVE-2023-6129 CVE-2023-6176 CVE-2023-6228 CVE-2023-6237 CVE-2023-6531 CVE-2023-6546 CVE-2023-6622 CVE-2023-6915 CVE-2023-6931 CVE-2023-6932 CVE-2023-7008 CVE-2023-24023 CVE-2023-25193 CVE-2023-25775 CVE-2023-28464 CVE-2023-28866 CVE-2023-31083 CVE-2023-31122 CVE-2023-37453 CVE-2023-38469 CVE-2023-38470 CVE-2023-38471 CVE-2023-38472 CVE-2023-38473 CVE-2023-39189 CVE-2023-39193 CVE-2023-39194 CVE-2023-39198 CVE-2023-40745 CVE-2023-41175 CVE-2023-42754 CVE-2023-42756 CVE-2023-43785 CVE-2023-43786 CVE-2023-43787 CVE-2023-43788 CVE-2023-43789 CVE-2023-45288 CVE-2023-45863 CVE-2023-46862 CVE-2023-47038 CVE-2023-51043 CVE-2023-51779 CVE-2023-51780 CVE-2023-52434 CVE-2023-52448 CVE-2023-52476 CVE-2023-52489 CVE-2023-52522 CVE-2023-52529 CVE-2023-52574 CVE-2023-52578 CVE-2023-52580 CVE-2023-52581 CVE-2023-52597 CVE-2023-52610 CVE-2023-52620 CVE-2024-0565 CVE-2024-0727 CVE-2024-0841 CVE-2024-1085 CVE-2024-1086 CVE-2024-21011 CVE-2024-21012 CVE-2024-21068 CVE-2024-21085 CVE-2024-21094 CVE-2024-22365 CVE-2024-25062 CVE-2024-26582 CVE-2024-26583 CVE-2024-26584 CVE-2024-26585 CVE-2024-26586 CVE-2024-26593 CVE-2024-26602 CVE-2024-26609 CVE-2024-26633 CVE-2024-27316 CVE-2024-28834 CVE-2024-28835 1.1.13. Logging 5.8.6 This release includes OpenShift Logging Bug Fix Release 5.8.6 Security Update and OpenShift Logging Bug Fix Release 5.8.6 . 1.1.13.1. Enhancements Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint used in the storage secret. With this update, the validation process ensures the S3 endpoint is a valid S3 URL, and the LokiStack status updates to indicate any invalid URLs. ( LOG-5392 ) Before this update, the Loki Operator configured Loki to use path-based style access for the Amazon Simple Storage Service (S3), which has been deprecated. With this update, the Loki Operator defaults to virtual-host style without users needing to change their configuration. ( LOG-5402 ) 1.1.13.2. Bug fixes Before this update, the Elastisearch Operator ServiceMonitor in the openshift-operators-redhat namespace used static token and certificate authority (CA) files for authentication, causing errors in the Prometheus Operator in the User Workload Monitoring specification on the ServiceMonitor configuration. 
With this update, the Elastisearch Operator ServiceMonitor in the openshift-operators-redhat namespace now references a service account token secret by a LocalReference object. This approach allows the User Workload Monitoring specifications in the Prometheus Operator to handle the Elastisearch Operator ServiceMonitor successfully. This enables Prometheus to scrape the Elastisearch Operator metrics. ( LOG-5164 ) Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint URL format used in the storage secret. With this update, the S3 endpoint URL goes through a validation step that reflects on the status of the LokiStack . ( LOG-5398 ) 1.1.13.3. CVEs CVE-2023-4244 CVE-2023-5363 CVE-2023-5717 CVE-2023-5981 CVE-2023-6356 CVE-2023-6535 CVE-2023-6536 CVE-2023-6606 CVE-2023-6610 CVE-2023-6817 CVE-2023-46218 CVE-2023-51042 CVE-2024-0193 CVE-2024-0553 CVE-2024-0567 CVE-2024-0646 1.1.14. Logging 5.8.5 This release includes OpenShift Logging Bug Fix Release 5.8.5 . 1.1.14.1. Bug fixes Before this update, the configuration of the Loki Operator's ServiceMonitor could match many Kubernetes services, resulting in the Loki Operator's metrics being collected multiple times. With this update, the configuration of ServiceMonitor now only matches the dedicated metrics service. ( LOG-5250 ) Before this update, the Red Hat build pipeline did not use the existing build details in Loki builds and omitted information such as revision, branch, and version. With this update, the Red Hat build pipeline now adds these details to the Loki builds, fixing the issue. ( LOG-5201 ) Before this update, the Loki Operator checked if the pods were running to decide if the LokiStack was ready. With this update, it also checks if the pods are ready, so that the readiness of the LokiStack reflects the state of its components. ( LOG-5171 ) Before this update, running a query for log metrics caused an error in the histogram. With this update, the histogram toggle function and the chart are disabled and hidden because the histogram doesn't work with log metrics. ( LOG-5044 ) Before this update, the Loki and Elasticsearch bundle had the wrong maxOpenShiftVersion , resulting in IncompatibleOperatorsInstalled alerts. With this update, including 4.16 as the maxOpenShiftVersion property in the bundle fixes the issue. ( LOG-5272 ) Before this update, the build pipeline did not include linker flags for the build date, causing Loki builds to show empty strings for buildDate and goVersion . With this update, adding the missing linker flags in the build pipeline fixes the issue. ( LOG-5274 ) Before this update, a bug in LogQL parsing left out some line filters from the query. With this update, the parsing now includes all the line filters while keeping the original query unchanged. ( LOG-5270 ) Before this update, the Loki Operator ServiceMonitor in the openshift-operators-redhat namespace used static token and CA files for authentication, causing errors in the Prometheus Operator in the User Workload Monitoring spec on the ServiceMonitor configuration. With this update, the Loki Operator ServiceMonitor in openshift-operators-redhat namespace now references a service account token secret by a LocalReference object. This approach allows the User Workload Monitoring spec in the Prometheus Operator to handle the Loki Operator ServiceMonitor successfully, enabling Prometheus to scrape the Loki Operator metrics. ( LOG-5240 ) 1.1.14.2. 
CVEs CVE-2023-5363 CVE-2023-5981 CVE-2023-6135 CVE-2023-46218 CVE-2023-48795 CVE-2023-51385 CVE-2024-0553 CVE-2024-0567 CVE-2024-24786 CVE-2024-28849 1.1.15. Logging 5.8.4 This release includes OpenShift Logging Bug Fix Release 5.8.4 . 1.1.15.1. Bug fixes Before this update, the developer console's logs did not account for the current namespace, resulting in query rejection for users without cluster-wide log access. With this update, all supported OCP versions ensure correct namespace inclusion. ( LOG-4905 ) Before this update, the Cluster Logging Operator deployed ClusterRoles supporting LokiStack deployments only when the default log output was LokiStack. With this update, the roles are split into two groups: read and write. The write roles deploys based on the setting of the default log storage, just like all the roles used to do before. The read roles deploys based on whether the logging console plugin is active. ( LOG-4987 ) Before this update, multiple ClusterLogForwarders defining the same input receiver name had their service endlessly reconciled because of changing ownerReferences on one service. With this update, each receiver input will have its own service named with the convention of <CLF.Name>-<input.Name> . ( LOG-5009 ) Before this update, the ClusterLogForwarder did not report errors when forwarding logs to cloudwatch without a secret. With this update, the following error message appears when forwarding logs to cloudwatch without a secret: secret must be provided for cloudwatch output . ( LOG-5021 ) Before this update, the log_forwarder_input_info included application , infrastructure , and audit input metric points. With this update, http is also added as a metric point. ( LOG-5043 ) 1.1.15.2. CVEs CVE-2021-35937 CVE-2021-35938 CVE-2021-35939 CVE-2022-3545 CVE-2022-24963 CVE-2022-36402 CVE-2022-41858 CVE-2023-2166 CVE-2023-2176 CVE-2023-3777 CVE-2023-3812 CVE-2023-4015 CVE-2023-4622 CVE-2023-4623 CVE-2023-5178 CVE-2023-5363 CVE-2023-5388 CVE-2023-5633 CVE-2023-6679 CVE-2023-7104 CVE-2023-27043 CVE-2023-38409 CVE-2023-40283 CVE-2023-42753 CVE-2023-43804 CVE-2023-45803 CVE-2023-46813 CVE-2024-20918 CVE-2024-20919 CVE-2024-20921 CVE-2024-20926 CVE-2024-20945 CVE-2024-20952 1.1.16. Logging 5.8.3 This release includes Logging Bug Fix 5.8.3 and Logging Security Fix 5.8.3 1.1.16.1. Bug fixes Before this update, when configured to read a custom S3 Certificate Authority the Loki Operator would not automatically update the configuration when the name of the ConfigMap or the contents changed. With this update, the Loki Operator is watching for changes to the ConfigMap and automatically updates the generated configuration. ( LOG-4969 ) Before this update, Loki outputs configured without a valid URL caused the collector pods to crash. With this update, outputs are subject to URL validation, resolving the issue. ( LOG-4822 ) Before this update the Cluster Logging Operator would generate collector configuration fields for outputs that did not specify a secret to use the service account bearer token. With this update, an output does not require authentication, resolving the issue. ( LOG-4962 ) Before this update, the tls.insecureSkipVerify field of an output was not set to a value of true without a secret defined. With this update, a secret is no longer required to set this value. ( LOG-4963 ) Before this update, output configurations allowed the combination of an insecure (HTTP) URL with TLS authentication. 
With this update, outputs configured for TLS authentication require a secure (HTTPS) URL. ( LOG-4893 ) 1.1.16.2. CVEs CVE-2021-35937 CVE-2021-35938 CVE-2021-35939 CVE-2023-7104 CVE-2023-27043 CVE-2023-48795 CVE-2023-51385 CVE-2024-0553 1.1.17. Logging 5.8.2 This release includes OpenShift Logging Bug Fix Release 5.8.2 . 1.1.17.1. Bug fixes Before this update, the LokiStack ruler pods would not format the IPv6 pod IP in HTTP URLs used for cross pod communication, causing querying rules and alerts through the Prometheus-compatible API to fail. With this update, the LokiStack ruler pods encapsulate the IPv6 pod IP in square brackets, resolving the issue. ( LOG-4890 ) Before this update, the developer console logs did not account for the current namespace, resulting in query rejection for users without cluster-wide log access. With this update, namespace inclusion has been corrected, resolving the issue. ( LOG-4947 ) Before this update, the logging view plugin of the OpenShift Container Platform web console did not allow for custom node placement and tolerations. With this update, defining custom node placements and tolerations has been added to the logging view plugin of the OpenShift Container Platform web console. ( LOG-4912 ) 1.1.17.2. CVEs CVE-2022-44638 CVE-2023-1192 CVE-2023-5345 CVE-2023-20569 CVE-2023-26159 CVE-2023-39615 CVE-2023-45871 1.1.18. Logging 5.8.1 This release includes OpenShift Logging Bug Fix Release 5.8.1 and OpenShift Logging Bug Fix Release 5.8.1 Kibana . 1.1.18.1. Enhancements 1.1.18.1.1. Log Collection With this update, while configuring Vector as a collector, you can add logic to the Red Hat OpenShift Logging Operator to use a token specified in the secret in place of the token associated with the service account. ( LOG-4780 ) With this update, the BoltDB Shipper Loki dashboards are now renamed to Index dashboards. ( LOG-4828 ) 1.1.18.2. Bug fixes Before this update, the ClusterLogForwarder created empty indices after enabling the parsing of JSON logs, even when the rollover conditions were not met. With this update, the ClusterLogForwarder skips the rollover when the write-index is empty. ( LOG-4452 ) Before this update, the Vector set the default log level incorrectly. With this update, the correct log level is set by improving the enhancement of regular expression, or regexp , for log level detection. ( LOG-4480 ) Before this update, during the process of creating index patterns, the default alias was missing from the initial index in each log output. As a result, Kibana users were unable to create index patterns by using OpenShift Elasticsearch Operator. This update adds the missing aliases to OpenShift Elasticsearch Operator, resolving the issue. Kibana users can now create index patterns that include the {app,infra,audit}-000001 indexes. ( LOG-4683 ) Before this update, Fluentd collector pods were in a CrashLoopBackOff state due to binding of the Prometheus server on IPv6 clusters. With this update, the collectors work properly on IPv6 clusters. ( LOG-4706 ) Before this update, the Red Hat OpenShift Logging Operator would undergo numerous reconciliations whenever there was a change in the ClusterLogForwarder . With this update, the Red Hat OpenShift Logging Operator disregards the status changes in the collector daemonsets that triggered the reconciliations. ( LOG-4741 ) Before this update, the Vector log collector pods were stuck in the CrashLoopBackOff state on IBM Power machines. 
With this update, the Vector log collector pods start successfully on IBM Power architecture machines. ( LOG-4768 ) Before this update, forwarding with a legacy forwarder to an internal LokiStack would produce SSL certificate errors using Fluentd collector pods. With this update, the log collector service account is used by default for authentication, using the associated token and ca.crt . ( LOG-4791 ) Before this update, forwarding with a legacy forwarder to an internal LokiStack would produce SSL certificate errors using Vector collector pods. With this update, the log collector service account is used by default for authentication and also using the associated token and ca.crt . ( LOG-4852 ) Before this fix, IPv6 addresses would not be parsed correctly after evaluating a host or multiple hosts for placeholders. With this update, IPv6 addresses are correctly parsed. ( LOG-4811 ) Before this update, it was necessary to create a ClusterRoleBinding to collect audit permissions for HTTP receiver inputs. With this update, it is not necessary to create the ClusterRoleBinding because the endpoint already depends upon the cluster certificate authority. ( LOG-4815 ) Before this update, the Loki Operator did not mount a custom CA bundle to the ruler pods. As a result, during the process to evaluate alerting or recording rules, object storage access failed. With this update, the Loki Operator mounts the custom CA bundle to all ruler pods. The ruler pods can download logs from object storage to evaluate alerting or recording rules. ( LOG-4836 ) Before this update, while removing the inputs.receiver section in the ClusterLogForwarder , the HTTP input services and its associated secrets were not deleted. With this update, the HTTP input resources are deleted when not needed. ( LOG-4612 ) Before this update, the ClusterLogForwarder indicated validation errors in the status, but the outputs and the pipeline status did not accurately reflect the specific issues. With this update, the pipeline status displays the validation failure reasons correctly in case of misconfigured outputs, inputs, or filters. ( LOG-4821 ) Before this update, changing a LogQL query that used controls such as time range or severity changed the label matcher operator defining it like a regular expression. With this update, regular expression operators remain unchanged when updating the query. ( LOG-4841 ) 1.1.18.3. CVEs CVE-2007-4559 CVE-2021-3468 CVE-2021-3502 CVE-2021-3826 CVE-2021-43618 CVE-2022-3523 CVE-2022-3565 CVE-2022-3594 CVE-2022-4285 CVE-2022-38457 CVE-2022-40133 CVE-2022-40982 CVE-2022-41862 CVE-2022-42895 CVE-2023-0597 CVE-2023-1073 CVE-2023-1074 CVE-2023-1075 CVE-2023-1076 CVE-2023-1079 CVE-2023-1206 CVE-2023-1249 CVE-2023-1252 CVE-2023-1652 CVE-2023-1855 CVE-2023-1981 CVE-2023-1989 CVE-2023-2731 CVE-2023-3138 CVE-2023-3141 CVE-2023-3161 CVE-2023-3212 CVE-2023-3268 CVE-2023-3316 CVE-2023-3358 CVE-2023-3576 CVE-2023-3609 CVE-2023-3772 CVE-2023-3773 CVE-2023-4016 CVE-2023-4128 CVE-2023-4155 CVE-2023-4194 CVE-2023-4206 CVE-2023-4207 CVE-2023-4208 CVE-2023-4273 CVE-2023-4641 CVE-2023-22745 CVE-2023-26545 CVE-2023-26965 CVE-2023-26966 CVE-2023-27522 CVE-2023-29491 CVE-2023-29499 CVE-2023-30456 CVE-2023-31486 CVE-2023-32324 CVE-2023-32573 CVE-2023-32611 CVE-2023-32665 CVE-2023-33203 CVE-2023-33285 CVE-2023-33951 CVE-2023-33952 CVE-2023-34241 CVE-2023-34410 CVE-2023-35825 CVE-2023-36054 CVE-2023-37369 CVE-2023-38197 CVE-2023-38545 CVE-2023-38546 CVE-2023-39191 CVE-2023-39975 CVE-2023-44487 1.1.19. 
Logging 5.8.0 This release includes OpenShift Logging Bug Fix Release 5.8.0 and OpenShift Logging Bug Fix Release 5.8.0 Kibana . 1.1.19.1. Deprecation notice In Logging 5.8, Elasticsearch, Fluentd, and Kibana are deprecated and are planned to be removed in Logging 6.0, which is expected to be shipped alongside a future release of OpenShift Container Platform. Red Hat will provide critical and above CVE bug fixes and support for these components during the current release lifecycle, but these components will no longer receive feature enhancements. The Vector-based collector provided by the Red Hat OpenShift Logging Operator and LokiStack provided by the Loki Operator are the preferred Operators for log collection and storage. We encourage all users to adopt the Vector and Loki log stack, as this will be the stack that will be enhanced going forward. 1.1.19.2. Enhancements 1.1.19.2.1. Log Collection With this update, the LogFileMetricExporter is no longer deployed with the collector by default. You must manually create a LogFileMetricExporter custom resource (CR) to generate metrics from the logs produced by running containers. If you do not create the LogFileMetricExporter CR, you may see a No datapoints found message in the OpenShift Container Platform web console dashboard for Produced Logs . ( LOG-3819 ) With this update, you can deploy multiple, isolated, and RBAC-protected ClusterLogForwarder custom resource (CR) instances in any namespace. This allows independent groups to forward desired logs to any destination while isolating their configuration from other collector deployments. ( LOG-1343 ) Important In order to support multi-cluster log forwarding in namespaces other than the openshift-logging namespace, you must update the Red Hat OpenShift Logging Operator to watch all namespaces. This functionality is supported by default in new Red Hat OpenShift Logging Operator version 5.8 installations. With this update, you can use the flow control or rate limiting mechanism to limit the volume of log data that can be collected or forwarded by dropping excess log records. The input limits prevent poorly-performing containers from overloading the Logging, and the output limits put a ceiling on the rate of logs shipped to a given data store. ( LOG-884 ) With this update, you can configure the log collector to look for HTTP connections and receive logs as an HTTP server, also known as a webhook. ( LOG-4562 ) With this update, you can configure audit policies to control which Kubernetes and OpenShift API server events are forwarded by the log collector. ( LOG-3982 ) 1.1.19.2.2. Log Storage With this update, LokiStack administrators can have more fine-grained control over who can access which logs by granting access to logs on a namespace basis. ( LOG-3841 ) With this update, the Loki Operator introduces PodDisruptionBudget configuration on LokiStack deployments to ensure normal operations during OpenShift Container Platform cluster restarts by keeping ingestion and the query path available. ( LOG-3839 ) With this update, the reliability of existing LokiStack installations is seamlessly improved by applying a set of default Affinity and Anti-Affinity policies. ( LOG-3840 ) With this update, you can manage zone-aware data replication as an administrator in LokiStack, in order to enhance reliability in the event of a zone failure. ( LOG-3266 )
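For example, zone-aware data replication is configured through the replication section of the LokiStack custom resource. The following is a minimal sketch based on the LokiStack API as we understand it; the metadata name, namespace, size, storage secret, storage class, and topologyKey values are illustrative assumptions rather than values taken from this document.

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki               # assumed example name
  namespace: openshift-logging     # assumed namespace for the logging deployment
spec:
  size: 1x.small                   # assumed example size
  storageClassName: gp3-csi        # assumed example storage class
  storage:
    secret:
      name: logging-loki-s3        # assumed example object storage secret
      type: s3
  replication:
    factor: 2                      # keep two replicas of each stream
    zones:
    - topologyKey: topology.kubernetes.io/zone   # spread replicas across availability zones
      maxSkew: 1                                 # allow at most one replica of imbalance per zone

Applying a configuration of this shape instructs the Loki Operator to schedule replicas across the zones that match the given topology key, so that log data remains available if a single zone fails.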
With this update, a new supported small-scale LokiStack size of 1x.extra-small is introduced for OpenShift Container Platform clusters hosting a few workloads and smaller ingestion volumes (up to 100GB/day). ( LOG-4329 ) With this update, the LokiStack administrator has access to an official Loki dashboard to inspect the storage performance and the health of each component. ( LOG-4327 ) 1.1.19.2.3. Log Console With this update, you can enable the Logging Console Plugin when Elasticsearch is the default Log Store. ( LOG-3856 ) With this update, OpenShift Container Platform application owners can receive notifications for application log-based alerts on the OpenShift Container Platform web console Developer perspective for OpenShift Container Platform version 4.14 and later. ( LOG-3548 ) 1.1.19.3. Known Issues Currently, Splunk log forwarding might not work after upgrading to version 5.8 of the Red Hat OpenShift Logging Operator. This issue is caused by transitioning from OpenSSL version 1.1.1 to version 3.0.7. In the newer OpenSSL version, there is a default behavior change, where connections to TLS 1.2 endpoints are rejected if they do not expose the RFC 5746 extension. As a workaround, enable TLS 1.3 support on the TLS-terminating load balancer in front of the Splunk HEC (HTTP Event Collector) endpoint. Splunk is a third-party system, and this must be configured on the Splunk side. Currently, there is a flaw in handling multiplexed streams in the HTTP/2 protocol, where you can repeatedly make a request for a new multiplex stream and immediately send an RST_STREAM frame to cancel it. This creates extra work for the server to set up and tear down the streams, resulting in a denial of service due to server resource consumption. There is currently no workaround for this issue. ( LOG-4609 ) Currently, when using FluentD as the collector, the collector pod cannot start on an IPv6-enabled OpenShift Container Platform cluster. The pod logs produce the following error: [error]: unexpected error error_class=SocketError error="getaddrinfo: Name or service not known" . There is currently no workaround for this issue. ( LOG-4706 ) Currently, the log alert is not available on an IPv6-enabled cluster. There is currently no workaround for this issue. ( LOG-4709 ) Currently, must-gather cannot gather any logs on a FIPS-enabled cluster, because the required OpenSSL library is not available in the cluster-logging-rhel9-operator . There is currently no workaround for this issue. ( LOG-4403 ) Currently, when deploying the logging version 5.8 on a FIPS-enabled cluster, the collector pods cannot start and are stuck in CrashLoopBackOff status, while using FluentD as a collector. There is currently no workaround for this issue. ( LOG-3933 ) 1.1.19.4. CVEs CVE-2023-40217 1.2. Logging 5.7 Note Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y , where x.y represents the major and minor version of logging you have installed. For example, stable-5.7 . 1.2.1. Logging 5.7.15 This release includes OpenShift Logging Bug Fix 5.7.15 . 1.2.1.1.
Bug fixes Before this update, there was a delay in restarting Ingesters when configuring LokiStack , because the Loki Operator sets the write-ahead log replay_memory_ceiling to zero bytes for the 1x.demo size. With this update, the minimum value used for the replay_memory_ceiling has been increased to avoid delays. ( LOG-5616 ) 1.2.1.2. CVEs CVE-2019-25162 CVE-2020-15778 CVE-2020-36777 CVE-2021-43618 CVE-2021-46934 CVE-2021-47013 CVE-2021-47055 CVE-2021-47118 CVE-2021-47153 CVE-2021-47171 CVE-2021-47185 CVE-2022-4645 CVE-2022-48627 CVE-2022-48669 CVE-2023-6004 CVE-2023-6240 CVE-2023-6597 CVE-2023-6918 CVE-2023-7008 CVE-2023-43785 CVE-2023-43786 CVE-2023-43787 CVE-2023-43788 CVE-2023-43789 CVE-2023-52439 CVE-2023-52445 CVE-2023-52477 CVE-2023-52513 CVE-2023-52520 CVE-2023-52528 CVE-2023-52565 CVE-2023-52578 CVE-2023-52594 CVE-2023-52595 CVE-2023-52598 CVE-2023-52606 CVE-2023-52607 CVE-2023-52610 CVE-2024-0340 CVE-2024-0450 CVE-2024-22365 CVE-2024-23307 CVE-2024-25062 CVE-2024-25744 CVE-2024-26458 CVE-2024-26461 CVE-2024-26593 CVE-2024-26603 CVE-2024-26610 CVE-2024-26615 CVE-2024-26642 CVE-2024-26643 CVE-2024-26659 CVE-2024-26664 CVE-2024-26693 CVE-2024-26694 CVE-2024-26743 CVE-2024-26744 CVE-2024-26779 CVE-2024-26872 CVE-2024-26892 CVE-2024-26987 CVE-2024-26901 CVE-2024-26919 CVE-2024-26933 CVE-2024-26934 CVE-2024-26964 CVE-2024-26973 CVE-2024-26993 CVE-2024-27014 CVE-2024-27048 CVE-2024-27052 CVE-2024-27056 CVE-2024-27059 CVE-2024-28834 CVE-2024-33599 CVE-2024-33600 CVE-2024-33601 CVE-2024-33602 1.2.2. Logging 5.7.14 This release includes OpenShift Logging Bug Fix 5.7.14 . 1.2.2.1. Bug fixes Before this update, an issue in the metrics collection code of the Logging Operator caused it to report stale telemetry metrics. With this update, the Logging Operator does not report stale telemetry metrics. ( LOG-5472 ) 1.2.2.2. CVEs CVE-2023-45288 CVE-2023-52425 CVE-2024-2961 CVE-2024-21011 CVE-2024-21012 CVE-2024-21068 CVE-2024-21085 CVE-2024-21094 CVE-2024-28834 1.2.3. Logging 5.7.13 This release includes OpenShift Logging Bug Fix 5.7.13 . 1.2.3.1. Enhancements Before this update, the Loki Operator configured Loki to use path-based style access for the Amazon Simple Storage Service (S3), which has been deprecated. With this update, the Loki Operator defaults to virtual-host style without users needing to change their configuration. ( LOG-5403 ) Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint used in the storage secret. With this update, the validation process ensures the S3 endpoint is a valid S3 URL, and the LokiStack status updates to indicate any invalid URLs. ( LOG-5393 ) 1.2.3.2. Bug fixes Before this update, the Elastisearch Operator ServiceMonitor in the openshift-operators-redhat namespace used static token and certificate authority (CA) files for authentication, causing errors in the Prometheus Operator in the User Workload Monitoring specification on the ServiceMonitor configuration. With this update, the Elastisearch Operator ServiceMonitor in the openshift-operators-redhat namespace now references a service account token secret by a LocalReference object. This approach allows the User Workload Monitoring specifications in the Prometheus Operator to handle the Elastisearch Operator ServiceMonitor successfully. This enables Prometheus to scrape the Elastisearch Operator metrics. ( LOG-5243 ) Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint URL format used in the storage secret. 
With this update, the S3 endpoint URL goes through a validation step that reflects on the status of the LokiStack . ( LOG-5399 ) 1.2.3.3. CVEs CVE-2021-33631 CVE-2021-43618 CVE-2022-38096 CVE-2022-48624 CVE-2023-6546 CVE-2023-6931 CVE-2023-28322 CVE-2023-38546 CVE-2023-46218 CVE-2023-51042 CVE-2024-0565 CVE-2024-1086 1.2.4. Logging 5.7.12 This release includes OpenShift Logging Bug Fix 5.7.12 . 1.2.4.1. Bug fixes Before this update, the Loki Operator checked if the pods were running to decide if the LokiStack was ready. With this update, it also checks if the pods are ready, so that the readiness of the LokiStack reflects the state of its components. ( LOG-5172 ) Before this update, the Red Hat build pipeline didn't use the existing build details in Loki builds and omitted information such as revision, branch, and version. With this update, the Red Hat build pipeline now adds these details to the Loki builds, fixing the issue. ( LOG-5202 ) Before this update, the configuration of the Loki Operator's ServiceMonitor could match many Kubernetes services, resulting in the Loki Operator's metrics being collected multiple times. With this update, the configuration of ServiceMonitor now only matches the dedicated metrics service. ( LOG-5251 ) Before this update, the build pipeline did not include linker flags for the build date, causing Loki builds to show empty strings for buildDate and goVersion . With this update, adding the missing linker flags in the build pipeline fixes the issue. ( LOG-5275 ) Before this update, the Loki Operator ServiceMonitor in the openshift-operators-redhat namespace used static token and CA files for authentication, causing errors in the Prometheus Operator in the User Workload Monitoring spec on the ServiceMonitor configuration. With this update, the Loki Operator ServiceMonitor in openshift-operators-redhat namespace now references a service account token secret by a LocalReference object. This approach allows the User Workload Monitoring spec in the Prometheus Operator to handle the Loki Operator ServiceMonitor successfully, enabling Prometheus to scrape the Loki Operator metrics. ( LOG-5241 ) 1.2.4.2. CVEs CVE-2021-35937 CVE-2021-35938 CVE-2021-35939 CVE-2022-3545 CVE-2022-41858 CVE-2023-1073 CVE-2023-1838 CVE-2023-2166 CVE-2023-2176 CVE-2023-4623 CVE-2023-4921 CVE-2023-5717 CVE-2023-6135 CVE-2023-6356 CVE-2023-6535 CVE-2023-6536 CVE-2023-6606 CVE-2023-6610 CVE-2023-6817 CVE-2023-7104 CVE-2023-27043 CVE-2023-40283 CVE-2023-45871 CVE-2023-46813 CVE-2023-48795 CVE-2023-51385 CVE-2024-0553 CVE-2024-0646 CVE-2024-24786 1.2.5. Logging 5.7.11 This release includes Logging Bug Fix 5.7.11 . 1.2.5.1. Bug fixes Before this update, when configured to read a custom S3 Certificate Authority, the Loki Operator would not automatically update the configuration when the name of the ConfigMap object or the contents changed. With this update, the Loki Operator now watches for changes to the ConfigMap object and automatically updates the generated configuration. ( LOG-4968 ) 1.2.5.2. CVEs CVE-2023-39326 1.2.6. Logging 5.7.10 This release includes OpenShift Logging Bug Fix Release 5.7.10 . 1.2.6.1. Bug fix Before this update, the LokiStack ruler pods would not format the IPv6 pod IP in HTTP URLs used for cross pod communication, causing querying rules and alerts through the Prometheus-compatible API to fail. With this update, the LokiStack ruler pods encapsulate the IPv6 pod IP in square brackets, resolving the issue. ( LOG-4891 ) 1.2.6.2. 
CVEs CVE-2007-4559 CVE-2021-43975 CVE-2022-3594 CVE-2022-3640 CVE-2022-4285 CVE-2022-4744 CVE-2022-28388 CVE-2022-38457 CVE-2022-40133 CVE-2022-40982 CVE-2022-41862 CVE-2022-42895 CVE-2022-45869 CVE-2022-45887 CVE-2022-48337 CVE-2022-48339 CVE-2023-0458 CVE-2023-0590 CVE-2023-0597 CVE-2023-1073 CVE-2023-1074 CVE-2023-1075 CVE-2023-1079 CVE-2023-1118 CVE-2023-1206 CVE-2023-1252 CVE-2023-1382 CVE-2023-1855 CVE-2023-1989 CVE-2023-1998 CVE-2023-2513 CVE-2023-3138 CVE-2023-3141 CVE-2023-3161 CVE-2023-3212 CVE-2023-3268 CVE-2023-3446 CVE-2023-3609 CVE-2023-3611 CVE-2023-3772 CVE-2023-3817 CVE-2023-4016 CVE-2023-4128 CVE-2023-4132 CVE-2023-4155 CVE-2023-4206 CVE-2023-4207 CVE-2023-4208 CVE-2023-4641 CVE-2023-4732 CVE-2023-5678 CVE-2023-22745 CVE-2023-23455 CVE-2023-26545 CVE-2023-28328 CVE-2023-28772 CVE-2023-30456 CVE-2023-31084 CVE-2023-31436 CVE-2023-31486 CVE-2023-33203 CVE-2023-33951 CVE-2023-33952 CVE-2023-35823 CVE-2023-35824 CVE-2023-35825 CVE-2023-38037 CVE-2024-0443 1.2.7. Logging 5.7.9 This release includes OpenShift Logging Bug Fix Release 5.7.9 . 1.2.7.1. Bug fixes Before this fix, IPv6 addresses would not be parsed correctly after evaluating a host or multiple hosts for placeholders. With this update, IPv6 addresses are correctly parsed. ( LOG-4281 ) Before this update, the Vector failed to start on IPv4-only nodes. As a result, it failed to create a listener for its metrics endpoint with the following error: Failed to start Prometheus exporter: TCP bind failed: Address family not supported by protocol (os error 97) . With this update, the Vector operates normally on IPv4-only nodes. ( LOG-4589 ) Before this update, during the process of creating index patterns, the default alias was missing from the initial index in each log output. As a result, Kibana users were unable to create index patterns by using OpenShift Elasticsearch Operator. This update adds the missing aliases to OpenShift Elasticsearch Operator, resolving the issue. Kibana users can now create index patterns that include the {app,infra,audit}-000001 indexes. ( LOG-4806 ) Before this update, the Loki Operator did not mount a custom CA bundle to the ruler pods. As a result, during the process to evaluate alerting or recording rules, object storage access failed. With this update, the Loki Operator mounts the custom CA bundle to all ruler pods. The ruler pods can download logs from object storage to evaluate alerting or recording rules. ( LOG-4837 ) Before this update, changing a LogQL query using controls such as time range or severity changed the label matcher operator as though it was defined like a regular expression. With this update, regular expression operators remain unchanged when updating the query. ( LOG-4842 ) Before this update, the Vector collector deployments relied upon the default retry and buffering behavior. As a result, the delivery pipeline backed up trying to deliver every message when the availability of an output was unstable. With this update, the Vector collector deployments limit the number of message retries and drop messages after the threshold has been exceeded. ( LOG-4536 ) 1.2.7.2. 
CVEs CVE-2007-4559 CVE-2021-43975 CVE-2022-3594 CVE-2022-3640 CVE-2022-4744 CVE-2022-28388 CVE-2022-38457 CVE-2022-40133 CVE-2022-40982 CVE-2022-41862 CVE-2022-42895 CVE-2022-45869 CVE-2022-45887 CVE-2022-48337 CVE-2022-48339 CVE-2023-0458 CVE-2023-0590 CVE-2023-0597 CVE-2023-1073 CVE-2023-1074 CVE-2023-1075 CVE-2023-1079 CVE-2023-1118 CVE-2023-1206 CVE-2023-1252 CVE-2023-1382 CVE-2023-1855 CVE-2023-1981 CVE-2023-1989 CVE-2023-1998 CVE-2023-2513 CVE-2023-3138 CVE-2023-3141 CVE-2023-3161 CVE-2023-3212 CVE-2023-3268 CVE-2023-3609 CVE-2023-3611 CVE-2023-3772 CVE-2023-4016 CVE-2023-4128 CVE-2023-4132 CVE-2023-4155 CVE-2023-4206 CVE-2023-4207 CVE-2023-4208 CVE-2023-4641 CVE-2023-4732 CVE-2023-22745 CVE-2023-23455 CVE-2023-26545 CVE-2023-28328 CVE-2023-28772 CVE-2023-30456 CVE-2023-31084 CVE-2023-31436 CVE-2023-31486 CVE-2023-32324 CVE-2023-33203 CVE-2023-33951 CVE-2023-33952 CVE-2023-34241 CVE-2023-35823 CVE-2023-35824 CVE-2023-35825 1.2.8. Logging 5.7.8 This release includes OpenShift Logging Bug Fix Release 5.7.8 . 1.2.8.1. Bug fixes Before this update, there was a potential conflict when the same name was used for the outputRefs and inputRefs parameters in the ClusterLogForwarder custom resource (CR). As a result, the collector pods entered in a CrashLoopBackOff status. With this update, the output labels contain the OUTPUT_ prefix to ensure a distinction between output labels and pipeline names. ( LOG-4383 ) Before this update, while configuring the JSON log parser, if you did not set the structuredTypeKey or structuredTypeName parameters for the Cluster Logging Operator, no alert would display about an invalid configuration. With this update, the Cluster Logging Operator informs you about the configuration issue. ( LOG-4441 ) Before this update, if the hecToken key was missing or incorrect in the secret specified for a Splunk output, the validation failed because the Vector forwarded logs to Splunk without a token. With this update, if the hecToken key is missing or incorrect, the validation fails with the A non-empty hecToken entry is required error message. ( LOG-4580 ) Before this update, selecting a date from the Custom time range for logs caused an error in the web console. With this update, you can select a date from the time range model in the web console successfully. ( LOG-4684 ) 1.2.8.2. CVEs CVE-2023-40217 CVE-2023-44487 1.2.9. Logging 5.7.7 This release includes OpenShift Logging Bug Fix Release 5.7.7 . 1.2.9.1. Bug fixes Before this update, FluentD normalized the logs emitted by the EventRouter differently from Vector. With this update, the Vector produces log records in a consistent format. ( LOG-4178 ) Before this update, there was an error in the query used for the FluentD Buffer Availability graph in the metrics dashboard created by the Cluster Logging Operator as it showed the minimum buffer usage. With this update, the graph shows the maximum buffer usage and is now renamed to FluentD Buffer Usage . ( LOG-4555 ) Before this update, deploying a LokiStack on IPv6-only or dual-stack OpenShift Container Platform clusters caused the LokiStack memberlist registration to fail. As a result, the distributor pods went into a crash loop. With this update, an administrator can enable IPv6 by setting the lokistack.spec.hashRing.memberlist.enableIPv6: value to true , which resolves the issue. ( LOG-4569 ) Before this update, the log collector relied on the default configuration settings for reading the container log lines. 
As a result, the log collector did not read the rotated files efficiently. With this update, there is an increase in the number of bytes read, which allows the log collector to efficiently process rotated files. ( LOG-4575 ) Before this update, the unused metrics in the Event Router caused the container to fail due to excessive memory usage. With this update, there is reduction in the memory usage of the Event Router by removing the unused metrics. ( LOG-4686 ) 1.2.9.2. CVEs CVE-2023-0800 CVE-2023-0801 CVE-2023-0802 CVE-2023-0803 CVE-2023-0804 CVE-2023-2002 CVE-2023-3090 CVE-2023-3390 CVE-2023-3776 CVE-2023-4004 CVE-2023-4527 CVE-2023-4806 CVE-2023-4813 CVE-2023-4863 CVE-2023-4911 CVE-2023-5129 CVE-2023-20593 CVE-2023-29491 CVE-2023-30630 CVE-2023-35001 CVE-2023-35788 1.2.10. Logging 5.7.6 This release includes OpenShift Logging Bug Fix Release 5.7.6 . 1.2.10.1. Bug fixes Before this update, the collector relied on the default configuration settings for reading the container log lines. As a result, the collector did not read the rotated files efficiently. With this update, there is an increase in the number of bytes read, which allows the collector to efficiently process rotated files. ( LOG-4501 ) Before this update, when users pasted a URL with predefined filters, some filters did not reflect. With this update, the UI reflects all the filters in the URL. ( LOG-4459 ) Before this update, forwarding to Loki using custom labels generated an error when switching from Fluentd to Vector. With this update, the Vector configuration sanitizes labels in the same way as Fluentd to ensure the collector starts and correctly processes labels. ( LOG-4460 ) Before this update, the Observability Logs console search field did not accept special characters that it should escape. With this update, it is escaping special characters properly in the query. ( LOG-4456 ) Before this update, the following warning message appeared while sending logs to Splunk: Timestamp was not found. With this update, the change overrides the name of the log field used to retrieve the Timestamp and sends it to Splunk without warning. ( LOG-4413 ) Before this update, the CPU and memory usage of Vector was increasing over time. With this update, the Vector configuration now contains the expire_metrics_secs=60 setting to limit the lifetime of the metrics and cap the associated CPU usage and memory footprint. ( LOG-4171 ) Before this update, the LokiStack gateway cached authorized requests very broadly. As a result, this caused wrong authorization results. With this update, LokiStack gateway caches on a more fine-grained basis which resolves this issue. ( LOG-4393 ) Before this update, the Fluentd runtime image included builder tools which were unnecessary at runtime. With this update, the builder tools are removed, resolving the issue. ( LOG-4467 ) 1.2.10.2. CVEs CVE-2023-3899 CVE-2023-4456 CVE-2023-32360 CVE-2023-34969 1.2.11. Logging 5.7.4 This release includes OpenShift Logging Bug Fix Release 5.7.4 . 1.2.11.1. Bug fixes Before this update, when forwarding logs to CloudWatch, a namespaceUUID value was not appended to the logGroupName field. With this update, the namespaceUUID value is included, so a logGroupName in CloudWatch appears as logGroupName: vectorcw.b443fb9e-bd4c-4b6a-b9d3-c0097f9ed286 . 
( LOG-2701 ) Before this update, when forwarding logs over HTTP to an off-cluster destination, the Vector collector was unable to authenticate to the cluster-wide HTTP proxy even though correct credentials were provided in the proxy URL. With this update, the Vector log collector can now authenticate to the cluster-wide HTTP proxy. ( LOG-3381 ) Before this update, the Operator would fail if the Fluentd collector was configured with Splunk as an output, due to this configuration being unsupported. With this update, configuration validation rejects unsupported outputs, resolving the issue. ( LOG-4237 ) Before this update, when the Vector collector was updated, an enabled = true value in the TLS configuration for AWS Cloudwatch logs and the GCP Stackdriver caused a configuration error. With this update, the enabled = true value is removed for these outputs, resolving the issue. ( LOG-4242 ) Before this update, the Vector collector occasionally panicked with the following error message in its log: thread 'vector-worker' panicked at 'all branches are disabled and there is no else branch', src/kubernetes/reflector.rs:26:9 . With this update, the error has been resolved. ( LOG-4275 ) Before this update, an issue in the Loki Operator caused the alert-manager configuration for the application tenant to disappear if the Operator was configured with additional options for that tenant. With this update, the generated Loki configuration now contains both the custom and the auto-generated configuration. ( LOG-4361 ) Before this update, when multiple roles were used to authenticate using STS with AWS Cloudwatch forwarding, a recent update caused the credentials to be non-unique. With this update, multiple combinations of STS roles and static credentials can once again be used to authenticate with AWS Cloudwatch. ( LOG-4368 ) Before this update, Loki filtered label values for active streams but did not remove duplicates, making Grafana's Label Browser unusable. With this update, Loki filters out duplicate label values for active streams, resolving the issue. ( LOG-4389 ) Pipelines with no name field specified in the ClusterLogForwarder custom resource (CR) stopped working after upgrading to OpenShift Logging 5.7. With this update, the error has been resolved. ( LOG-4120 ) 1.2.11.2. CVEs CVE-2022-25883 CVE-2023-22796 1.2.12. Logging 5.7.3 This release includes OpenShift Logging Bug Fix Release 5.7.3 . 1.2.12.1. Bug fixes Before this update, when viewing logs within the OpenShift Container Platform web console, cached files caused the data to not refresh. With this update, the bootstrap files are not cached, resolving the issue. ( LOG-4100 ) Before this update, the Loki Operator reset errors in a way that made identifying configuration problems difficult to troubleshoot. With this update, errors persist until the configuration error is resolved. ( LOG-4156 ) Before this update, the LokiStack ruler did not restart after changes were made to the RulerConfig custom resource (CR). With this update, the Loki Operator restarts the ruler pods after the RulerConfig CR is updated. ( LOG-4161 ) Before this update, the Vector collector terminated unexpectedly when input match label values contained a / character within the ClusterLogForwarder . This update resolves the issue by quoting the match label, enabling the collector to start and collect logs. ( LOG-4176 ) Before this update, the Loki Operator terminated unexpectedly when a LokiStack CR defined tenant limits, but not global limits. With this update, the Loki Operator can process LokiStack CRs without global limits, resolving the issue. ( LOG-4198 )
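As an illustration of the configuration that previously triggered this crash, the following is a minimal sketch of a LokiStack custom resource that defines per-tenant limits without a global limits section. The field names follow the LokiStack limits API as we understand it, and the metadata name, namespace, size, storage secret, storage class, and limit values are illustrative assumptions rather than values taken from this document.

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: logging-loki               # assumed example name
  namespace: openshift-logging     # assumed namespace for the logging deployment
spec:
  size: 1x.small                   # assumed example size
  storageClassName: gp3-csi        # assumed example storage class
  storage:
    secret:
      name: logging-loki-s3        # assumed example object storage secret
      type: s3
  limits:
    # Only tenant limits are set; spec.limits.global is intentionally omitted.
    tenants:
      application:
        ingestion:
          ingestionRate: 10        # assumed per-tenant ingestion rate limit, in MB per second
          ingestionBurstSize: 6    # assumed per-tenant burst size, in MB

With the fix described above, the Loki Operator processes a LokiStack CR of this shape without terminating, even though no global limits are defined.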
Before this update, Fluentd did not send logs to an Elasticsearch cluster when the private key provided was passphrase-protected. With this update, Fluentd properly handles passphrase-protected private keys when establishing a connection with Elasticsearch. ( LOG-4258 ) Before this update, clusters with more than 8,000 namespaces caused Elasticsearch to reject queries because the list of namespaces was larger than the http.max_header_size setting. With this update, the default value for header size has been increased, resolving the issue. ( LOG-4277 ) Before this update, label values containing a / character within the ClusterLogForwarder CR would cause the collector to terminate unexpectedly. With this update, slashes are replaced with underscores, resolving the issue. ( LOG-4095 ) Before this update, the Cluster Logging Operator terminated unexpectedly when set to an unmanaged state. With this update, a check ensures that the ClusterLogging resource is in the correct Management state before initiating the reconciliation of the ClusterLogForwarder CR, resolving the issue. ( LOG-4177 ) Before this update, when viewing logs within the OpenShift Container Platform web console, selecting a time range by dragging over the histogram did not work on the aggregated logs view inside the pod detail. With this update, the time range can be selected by dragging on the histogram in this view. ( LOG-4108 ) Before this update, when viewing logs within the OpenShift Container Platform web console, queries longer than 30 seconds timed out. With this update, the timeout value can be configured in the configmap/logging-view-plugin. ( LOG-3498 ) Before this update, when viewing logs within the OpenShift Container Platform web console, clicking the more data available option loaded more log entries only the first time it was clicked. With this update, more entries are loaded with each click. ( OU-188 ) Before this update, when viewing logs within the OpenShift Container Platform web console, clicking the streaming option would only display the streaming logs message without showing the actual logs. With this update, both the message and the log stream are displayed correctly. ( OU-166 ) 1.2.12.2. CVEs CVE-2020-24736 CVE-2022-48281 CVE-2023-1667 CVE-2023-2283 CVE-2023-24329 CVE-2023-26115 CVE-2023-26136 CVE-2023-26604 CVE-2023-28466 1.2.13. Logging 5.7.2 This release includes OpenShift Logging Bug Fix Release 5.7.2 . 1.2.13.1. Bug fixes Before this update, it was not possible to delete the openshift-logging namespace directly due to the presence of a pending finalizer. With this update, the finalizer is no longer utilized, enabling direct deletion of the namespace. ( LOG-3316 ) Before this update, the run.sh script would display an incorrect chunk_limit_size value if it was changed according to the OpenShift Container Platform documentation. However, when setting the chunk_limit_size via the environment variable $BUFFER_SIZE_LIMIT , the script would show the correct value. With this update, the run.sh script now consistently displays the correct chunk_limit_size value in both scenarios. ( LOG-3330 ) Before this update, the OpenShift Container Platform web console's logging view plugin did not allow for custom node placement or tolerations. This update adds the ability to define node placement and tolerations for the logging view plugin.
( LOG-3749 ) Before this update, the Cluster Logging Operator encountered an Unsupported Media Type exception when trying to send logs to DataDog via the Fluentd HTTP Plugin. With this update, users can seamlessly assign the content type for log forwarding by configuring the HTTP header Content-Type. The value provided is automatically assigned to the content_type parameter within the plugin, ensuring successful log transmission. ( LOG-3784 ) Before this update, when the detectMultilineErrors field was set to true in the ClusterLogForwarder custom resource (CR), PHP multi-line errors were recorded as separate log entries, causing the stack trace to be split across multiple messages. With this update, multi-line error detection for PHP is enabled, ensuring that the entire stack trace is included in a single log message. ( LOG-3878 ) Before this update, ClusterLogForwarder pipelines containing a space in their name caused the Vector collector pods to continuously crash. With this update, all spaces, dashes (-), and dots (.) in pipeline names are replaced with underscores (_). ( LOG-3945 ) Before this update, the log_forwarder_output metric did not include the http parameter. This update adds the missing parameter to the metric. ( LOG-3997 ) Before this update, Fluentd did not identify some multi-line JavaScript client exceptions when they ended with a colon. With this update, the Fluentd buffer name is prefixed with an underscore, resolving the issue. ( LOG-4019 ) Before this update, when configuring log forwarding to write to a Kafka output topic which matched a key in the payload, logs dropped due to an error. With this update, Fluentd's buffer name has been prefixed with an underscore, resolving the issue.( LOG-4027 ) Before this update, the LokiStack gateway returned label values for namespaces without applying the access rights of a user. With this update, the LokiStack gateway applies permissions to label value requests, resolving the issue. ( LOG-4049 ) Before this update, the Cluster Logging Operator API required a certificate to be provided by a secret when the tls.insecureSkipVerify option was set to true . With this update, the Cluster Logging Operator API no longer requires a certificate to be provided by a secret in such cases. The following configuration has been added to the Operator's CR: tls.verify_certificate = false tls.verify_hostname = false ( LOG-3445 ) Before this update, the LokiStack route configuration caused queries running longer than 30 seconds to timeout. With this update, the LokiStack global and per-tenant queryTimeout settings affect the route timeout settings, resolving the issue. ( LOG-4052 ) Before this update, a prior fix to remove defaulting of the collection.type resulted in the Operator no longer honoring the deprecated specs for resource, node selections, and tolerations. This update modifies the Operator behavior to always prefer the collection.logs spec over those of collection . This varies from behavior that allowed using both the preferred fields and deprecated fields but would ignore the deprecated fields when collection.type was populated. ( LOG-4185 ) Before this update, the Vector log collector did not generate TLS configuration for forwarding logs to multiple Kafka brokers if the broker URLs were not specified in the output. With this update, TLS configuration is generated appropriately for multiple brokers. ( LOG-4163 ) Before this update, the option to enable passphrase for log forwarding to Kafka was unavailable. 
This limitation presented a security risk as it could potentially expose sensitive information. With this update, users now have a seamless option to enable passphrase for log forwarding to Kafka. ( LOG-3314 ) Before this update, Vector log collector did not honor the tlsSecurityProfile settings for outgoing TLS connections. After this update, Vector handles TLS connection settings appropriately. ( LOG-4011 ) Before this update, not all available output types were included in the log_forwarder_output_info metrics. With this update, metrics contain Splunk and Google Cloud Logging data which was missing previously. ( LOG-4098 ) Before this update, when follow_inodes was set to true , the Fluentd collector could crash on file rotation. With this update, the follow_inodes setting does not crash the collector. ( LOG-4151 ) Before this update, the Fluentd collector could incorrectly close files that should be watched because of how those files were tracked. With this update, the tracking parameters have been corrected. ( LOG-4149 ) Before this update, forwarding logs with the Vector collector and naming a pipeline in the ClusterLogForwarder instance audit , application or infrastructure resulted in collector pods staying in the CrashLoopBackOff state with the following error in the collector log: ERROR vector::cli: Configuration error. error=redefinition of table transforms.audit for key transforms.audit After this update, pipeline names no longer clash with reserved input names, and pipelines can be named audit , application or infrastructure . ( LOG-4218 ) Before this update, when forwarding logs to a syslog destination with the Vector collector and setting the addLogSource flag to true , the following extra empty fields were added to the forwarded messages: namespace_name= , container_name= , and pod_name= . With this update, these fields are no longer added to journal logs. ( LOG-4219 ) Before this update, when a structuredTypeKey was not found, and a structuredTypeName was not specified, log messages were still parsed into structured object. With this update, parsing of logs is as expected. ( LOG-4220 ) 1.2.13.2. CVEs CVE-2021-26341 CVE-2021-33655 CVE-2021-33656 CVE-2022-1462 CVE-2022-1679 CVE-2022-1789 CVE-2022-2196 CVE-2022-2663 CVE-2022-3028 CVE-2022-3239 CVE-2022-3522 CVE-2022-3524 CVE-2022-3564 CVE-2022-3566 CVE-2022-3567 CVE-2022-3619 CVE-2022-3623 CVE-2022-3625 CVE-2022-3627 CVE-2022-3628 CVE-2022-3707 CVE-2022-3970 CVE-2022-4129 CVE-2022-20141 CVE-2022-25147 CVE-2022-25265 CVE-2022-30594 CVE-2022-36227 CVE-2022-39188 CVE-2022-39189 CVE-2022-41218 CVE-2022-41674 CVE-2022-42703 CVE-2022-42720 CVE-2022-42721 CVE-2022-42722 CVE-2022-43750 CVE-2022-47929 CVE-2023-0394 CVE-2023-0461 CVE-2023-1195 CVE-2023-1582 CVE-2023-2491 CVE-2023-22490 CVE-2023-23454 CVE-2023-23946 CVE-2023-25652 CVE-2023-25815 CVE-2023-27535 CVE-2023-29007 1.2.14. Logging 5.7.1 This release includes: OpenShift Logging Bug Fix Release 5.7.1 . 1.2.14.1. Bug fixes Before this update, the presence of numerous noisy messages within the Cluster Logging Operator pod logs caused reduced log readability, and increased difficulty in identifying important system events. With this update, the issue is resolved by significantly reducing the noisy messages within Cluster Logging Operator pod logs. ( LOG-3482 ) Before this update, the API server would reset the value for the CollectorSpec.Type field to vector , even when the custom resource used a different value. 
This update removes the default for the CollectorSpec.Type field to restore the behavior. ( LOG-4086 ) Before this update, a time range could not be selected in the OpenShift Container Platform web console by clicking and dragging over the logs histogram. With this update, clicking and dragging can be used to successfully select a time range. ( LOG-4501 ) Before this update, clicking on the Show Resources link in the OpenShift Container Platform web console did not produce any effect. With this update, the issue is resolved by fixing the functionality of the "Show Resources" link to toggle the display of resources for each log entry. ( LOG-3218 ) 1.2.14.2. CVEs CVE-2023-21930 CVE-2023-21937 CVE-2023-21938 CVE-2023-21939 CVE-2023-21954 CVE-2023-21967 CVE-2023-21968 CVE-2023-28617 1.2.15. Logging 5.7.0 This release includes OpenShift Logging Bug Fix Release 5.7.0 . 1.2.15.1. Enhancements With this update, you can enable logging to detect multi-line exceptions and reassemble them into a single log entry. To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder Custom Resource (CR) contains a detectMultilineErrors field, with a value of true . 1.2.15.2. Known Issues None. 1.2.15.3. Bug fixes Before this update, the nodeSelector attribute for the Gateway component of the LokiStack did not impact node scheduling. With this update, the nodeSelector attribute works as expected. ( LOG-3713 ) 1.2.15.4. CVEs CVE-2023-1999 CVE-2023-28617 1.3. Logging 5.6 Note Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y , where x.y represents the major and minor version of logging you have installed. For example, stable-5.7 . 1.3.1. Logging 5.6.27 This release includes RHBA-2024:10988 . 1.3.1.1. Bug fixes None. 1.3.1.2. CVEs CVE-2018-12699 CVE-2019-12900 CVE-2024-9287 CVE-2024-10041 CVE-2024-10963 CVE-2024-11168 CVE-2024-35195 CVE-2024-47875 CVE-2024-50602 1.3.2. Logging 5.6.26 This release includes RHBA-2024:10050 . 1.3.2.1. Bug fixes None. 1.3.2.2. CVEs CVE-2022-48773 CVE-2022-48936 CVE-2023-48161 CVE-2023-52492 CVE-2024-3596 CVE-2024-5535 CVE-2024-7006 CVE-2024-21208 CVE-2024-21210 CVE-2024-21217 CVE-2024-21235 CVE-2024-24857 CVE-2024-26851 CVE-2024-26924 CVE-2024-26976 CVE-2024-27017 CVE-2024-27062 CVE-2024-35839 CVE-2024-35898 CVE-2024-35939 CVE-2024-38540 CVE-2024-38541 CVE-2024-38586 CVE-2024-38608 CVE-2024-39503 CVE-2024-40924 CVE-2024-40961 CVE-2024-40983 CVE-2024-40984 CVE-2024-41009 CVE-2024-41042 CVE-2024-41066 CVE-2024-41092 CVE-2024-41093 CVE-2024-42070 CVE-2024-42079 CVE-2024-42244 CVE-2024-42284 CVE-2024-42292 CVE-2024-42301 CVE-2024-43854 CVE-2024-43880 CVE-2024-43889 CVE-2024-43892 CVE-2024-44935 CVE-2024-44989 CVE-2024-44990 CVE-2024-45018 CVE-2024-46826 CVE-2024-47668 1.3.3. Logging 5.6.25 This release includes OpenShift Logging Bug Fix Release 5.6.25 . 1.3.3.1. Bug fixes None. 1.3.3.2. 
CVEs CVE-2021-46984 CVE-2021-47097 CVE-2021-47101 CVE-2021-47287 CVE-2021-47289 CVE-2021-47321 CVE-2021-47338 CVE-2021-47352 CVE-2021-47383 CVE-2021-47384 CVE-2021-47385 CVE-2021-47386 CVE-2021-47393 CVE-2021-47412 CVE-2021-47432 CVE-2021-47441 CVE-2021-47455 CVE-2021-47466 CVE-2021-47497 CVE-2021-47527 CVE-2021-47560 CVE-2021-47582 CVE-2021-47609 CVE-2022-48619 CVE-2022-48754 CVE-2022-48760 CVE-2022-48804 CVE-2022-48836 CVE-2022-48866 CVE-2023-6040 CVE-2023-37920 CVE-2023-52470 CVE-2023-52476 CVE-2023-52478 CVE-2023-52522 CVE-2023-52605 CVE-2023-52683 CVE-2023-52798 CVE-2023-52800 CVE-2023-52809 CVE-2023-52817 CVE-2023-52840 CVE-2024-2398 CVE-2024-4032 CVE-2024-5535 CVE-2024-6232 CVE-2024-6345 CVE-2024-6923 CVE-2024-23848 CVE-2024-24791 CVE-2024-26595 CVE-2024-26600 CVE-2024-26638 CVE-2024-26645 CVE-2024-26649 CVE-2024-26665 CVE-2024-26717 CVE-2024-26720 CVE-2024-26769 CVE-2024-26846 CVE-2024-26855 CVE-2024-26880 CVE-2024-26894 CVE-2024-26923 CVE-2024-26939 CVE-2024-27013 CVE-2024-27042 CVE-2024-34155 CVE-2024-34156 CVE-2024-34158 CVE-2024-35809 CVE-2024-35877 CVE-2024-35884 CVE-2024-35944 CVE-2024-47101 CVE-2024-36883 CVE-2024-36901 CVE-2024-36902 CVE-2024-36919 CVE-2024-36920 CVE-2024-36922 CVE-2024-36939 CVE-2024-36953 CVE-2024-37356 CVE-2024-38558 CVE-2024-38559 CVE-2024-38570 CVE-2024-38579 CVE-2024-38581 CVE-2024-38619 CVE-2024-39471 CVE-2024-39499 CVE-2024-39501 CVE-2024-39506 CVE-2024-40901 CVE-2024-40904 CVE-2024-40911 CVE-2024-40912 CVE-2024-40929 CVE-2024-40931 CVE-2024-40941 CVE-2024-40954 CVE-2024-40958 CVE-2024-40959 CVE-2024-40960 CVE-2024-40972 CVE-2024-40977 CVE-2024-40978 CVE-2024-40988 CVE-2024-40989 CVE-2024-40995 CVE-2024-40997 CVE-2024-40998 CVE-2024-41005 CVE-2024-41007 CVE-2024-41008 CVE-2024-41012 CVE-2024-41013 CVE-2024-41014 CVE-2024-41023 CVE-2024-41035 CVE-2024-41038 CVE-2024-41039 CVE-2024-41040 CVE-2024-41041 CVE-2024-41044 CVE-2024-41055 CVE-2024-41056 CVE-2024-41060 CVE-2024-41064 CVE-2024-41065 CVE-2024-41071 CVE-2024-41076 CVE-2024-41090 CVE-2024-41091 CVE-2024-41097 CVE-2024-42084 CVE-2024-42090 CVE-2024-42094 CVE-2024-42096 CVE-2024-42114 CVE-2024-42124 CVE-2024-42131 CVE-2024-42152 CVE-2024-42154 CVE-2024-42225 CVE-2024-42226 CVE-2024-42228 CVE-2024-42237 CVE-2024-42238 CVE-2024-42240 CVE-2024-42246 CVE-2024-42265 CVE-2024-42322 CVE-2024-43830 CVE-2024-43871 CVE-2024-45490 CVE-2024-45491 CVE-2024-45492 Note For detailed information on Red Hat security ratings, review Severity ratings . 1.3.4. Logging 5.6.24 This release includes OpenShift Logging Bug Fix Release 5.6.24 . 1.3.4.1. Bug fixes None. 1.3.4.2. CVEs CVE-2024-2398 CVE-2024-4032 CVE-2024-6104 CVE-2024-6232 CVE-2024-6345 CVE-2024-6923 CVE-2024-30203 CVE-2024-30205 CVE-2024-39331 CVE-2024-45490 CVE-2024-45491 CVE-2024-45492 Note For detailed information on Red Hat security ratings, review Severity ratings . 1.3.5. Logging 5.6.23 This release includes OpenShift Logging Bug Fix Release 5.6.23 . 1.3.5.1. Bug fixes None. 1.3.5.2. 
CVEs CVE-2018-15209 CVE-2021-46939 CVE-2021-47018 CVE-2021-47257 CVE-2021-47284 CVE-2021-47304 CVE-2021-47373 CVE-2021-47408 CVE-2021-47461 CVE-2021-47468 CVE-2021-47491 CVE-2021-47548 CVE-2021-47579 CVE-2021-47624 CVE-2022-48632 CVE-2022-48743 CVE-2022-48747 CVE-2022-48757 CVE-2023-6228 CVE-2023-25433 CVE-2023-28746 CVE-2023-52356 CVE-2023-52451 CVE-2023-52463 CVE-2023-52469 CVE-2023-52471 CVE-2023-52486 CVE-2023-52530 CVE-2023-52619 CVE-2023-52622 CVE-2023-52623 CVE-2023-52648 CVE-2023-52653 CVE-2023-52658 CVE-2023-52662 CVE-2023-52679 CVE-2023-52707 CVE-2023-52730 CVE-2023-52756 CVE-2023-52762 CVE-2023-52764 CVE-2023-52775 CVE-2023-52777 CVE-2023-52784 CVE-2023-52791 CVE-2023-52796 CVE-2023-52803 CVE-2023-52811 CVE-2023-52832 CVE-2023-52834 CVE-2023-52845 CVE-2023-52847 CVE-2023-52864 CVE-2024-2201 CVE-2024-2398 CVE-2024-6345 CVE-2024-21131 CVE-2024-21138 CVE-2024-21140 CVE-2024-21144 CVE-2024-21145 CVE-2024-21147 CVE-2024-21823 CVE-2024-25739 CVE-2024-26586 CVE-2024-26614 CVE-2024-26640 CVE-2024-26660 CVE-2024-26669 CVE-2024-26686 CVE-2024-26698 CVE-2024-26704 CVE-2024-26733 CVE-2024-26740 CVE-2024-26772 CVE-2024-26773 CVE-2024-26802 CVE-2024-26810 CVE-2024-26837 CVE-2024-26840 CVE-2024-26843 CVE-2024-26852 CVE-2024-26853 CVE-2024-26870 CVE-2024-26878 CVE-2024-26908 CVE-2024-26921 CVE-2024-26925 CVE-2024-26940 CVE-2024-26958 CVE-2024-26960 CVE-2024-26961 CVE-2024-27010 CVE-2024-27011 CVE-2024-27019 CVE-2024-27020 CVE-2024-27025 CVE-2024-27065 CVE-2024-27388 CVE-2024-27395 CVE-2024-27434 CVE-2024-31076 CVE-2024-33621 CVE-2024-35790 CVE-2024-35801 CVE-2024-35807 CVE-2024-35810 CVE-2024-35814 CVE-2024-35823 CVE-2024-35824 CVE-2024-35847 CVE-2024-35876 CVE-2024-35893 CVE-2024-35896 CVE-2024-35897 CVE-2024-35899 CVE-2024-35900 CVE-2024-35910 CVE-2024-35912 CVE-2024-35924 CVE-2024-35925 CVE-2024-35930 CVE-2024-35937 CVE-2024-35938 CVE-2024-35946 CVE-2024-35947 CVE-2024-35952 CVE-2024-36000 CVE-2024-36005 CVE-2024-36006 CVE-2024-36010 CVE-2024-36016 CVE-2024-36017 CVE-2024-36020 CVE-2024-36025 CVE-2024-36270 CVE-2024-36286 CVE-2024-36489 CVE-2024-36886 CVE-2024-36889 CVE-2024-36896 CVE-2024-36904 CVE-2024-36905 CVE-2024-36917 CVE-2024-36921 CVE-2024-36927 CVE-2024-36929 CVE-2024-36933 CVE-2024-36940 CVE-2024-36941 CVE-2024-36945 CVE-2024-36950 CVE-2024-36954 CVE-2024-36960 CVE-2024-36971 CVE-2024-36978 CVE-2024-36979 CVE-2024-37370 CVE-2024-37371 CVE-2024-37891 CVE-2024-38428 CVE-2024-38538 CVE-2024-38555 CVE-2024-38573 CVE-2024-38575 CVE-2024-38596 CVE-2024-38598 CVE-2024-38615 CVE-2024-38627 CVE-2024-39276 CVE-2024-39472 CVE-2024-39476 CVE-2024-39487 CVE-2024-39502 CVE-2024-40927 CVE-2024-40974 1.3.6. Logging 5.6.22 This release includes OpenShift Logging Bug Fix 5.6.22 1.3.6.1. Bug fixes Before this update, the Loki Operator overwrote user annotations on the LokiStack Route resource, causing customizations to drop. With this update, the Loki Operator no longer overwrites Route annotations, fixing the issue. ( LOG-5947 ) 1.3.6.2. CVEs CVE-2023-2953 CVE-2024-3651 CVE-2024-24806 CVE-2024-28182 CVE-2024-35235 1.3.7. Logging 5.6.21 This release includes OpenShift Logging Bug Fix 5.6.21 1.3.7.1. Bug fixes Before this update, LokiStack was missing a route for the Volume API, which caused the following error: 404 not found . With this update, LokiStack exposes the Volume API, resolving the issue. ( LOG-5751 ) 1.3.7.2. 
CVEs CVE-2020-26555 CVE-2021-46909 CVE-2021-46972 CVE-2021-47069 CVE-2021-47073 CVE-2021-47236 CVE-2021-47310 CVE-2021-47311 CVE-2021-47353 CVE-2021-47356 CVE-2021-47456 CVE-2021-47495 CVE-2022-48624 CVE-2023-2953 CVE-2023-5090 CVE-2023-52464 CVE-2023-52560 CVE-2023-52615 CVE-2023-52626 CVE-2023-52667 CVE-2023-52669 CVE-2023-52675 CVE-2023-52686 CVE-2023-52700 CVE-2023-52703 CVE-2023-52781 CVE-2023-52813 CVE-2023-52835 CVE-2023-52877 CVE-2023-52878 CVE-2023-52881 CVE-2024-3651 CVE-2024-24790 CVE-2024-24806 CVE-2024-26583 CVE-2024-26584 CVE-2024-26585 CVE-2024-26656 CVE-2024-26675 CVE-2024-26735 CVE-2024-26759 CVE-2024-26801 CVE-2024-26804 CVE-2024-26826 CVE-2024-26859 CVE-2024-26906 CVE-2024-26907 CVE-2024-26974 CVE-2024-26982 CVE-2024-27397 CVE-2024-27410 CVE-2024-28182 CVE-2024-32002 CVE-2024-32004 CVE-2024-32020 CVE-2024-32021 CVE-2024-32465 CVE-2024-32487 CVE-2024-35235 CVE-2024-35789 CVE-2024-35835 CVE-2024-35838 CVE-2024-35845 CVE-2024-35852 CVE-2024-35853 CVE-2024-35854 CVE-2024-35855 CVE-2024-35888 CVE-2024-35890 CVE-2024-35958 CVE-2024-35959 CVE-2024-35960 CVE-2024-36004 CVE-2024-36007 1.3.8. Logging 5.6.20 This release includes OpenShift Logging Bug Fix 5.6.20 1.3.8.1. Bug fixes Before this update, there was a delay in restarting Ingesters when configuring LokiStack , because the Loki Operator sets the write-ahead log replay_memory_ceiling to zero bytes for the 1x.demo size. With this update, the minimum value used for the replay_memory_ceiling has been increased to avoid delays. ( LOG-5617 ) 1.3.8.2. CVEs CVE-2019-25162 CVE-2020-15778 CVE-2020-36777 CVE-2021-43618 CVE-2021-46934 CVE-2021-47013 CVE-2021-47055 CVE-2021-47118 CVE-2021-47153 CVE-2021-47171 CVE-2021-47185 CVE-2022-4645 CVE-2022-48627 CVE-2022-48669 CVE-2023-6004 CVE-2023-6240 CVE-2023-6597 CVE-2023-6918 CVE-2023-7008 CVE-2023-43785 CVE-2023-43786 CVE-2023-43787 CVE-2023-43788 CVE-2023-43789 CVE-2023-52439 CVE-2023-52445 CVE-2023-52477 CVE-2023-52513 CVE-2023-52520 CVE-2023-52528 CVE-2023-52565 CVE-2023-52578 CVE-2023-52594 CVE-2023-52595 CVE-2023-52598 CVE-2023-52606 CVE-2023-52607 CVE-2023-52610 CVE-2024-0340 CVE-2024-0450 CVE-2024-22365 CVE-2024-23307 CVE-2024-25062 CVE-2024-25744 CVE-2024-26458 CVE-2024-26461 CVE-2024-26593 CVE-2024-26603 CVE-2024-26610 CVE-2024-26615 CVE-2024-26642 CVE-2024-26643 CVE-2024-26659 CVE-2024-26664 CVE-2024-26693 CVE-2024-26694 CVE-2024-26743 CVE-2024-26744 CVE-2024-26779 CVE-2024-26872 CVE-2024-26892 CVE-2024-26987 CVE-2024-26901 CVE-2024-26919 CVE-2024-26933 CVE-2024-26934 CVE-2024-26964 CVE-2024-26973 CVE-2024-26993 CVE-2024-27014 CVE-2024-27048 CVE-2024-27052 CVE-2024-27056 CVE-2024-27059 CVE-2024-28834 CVE-2024-33599 CVE-2024-33600 CVE-2024-33601 CVE-2024-33602 1.3.9. Logging 5.6.19 This release includes OpenShift Logging Bug Fix 5.6.19 1.3.9.1. Bug fixes Before this update, an issue in the metrics collection code of the Logging Operator caused it to report stale telemetry metrics. With this update, the Logging Operator does not report stale telemetry metrics. ( LOG-5529 ) 1.3.9.2. CVEs CVE-2023-45288 CVE-2023-52425 CVE-2024-2961 CVE-2024-21011 CVE-2024-21012 CVE-2024-21068 CVE-2024-21085 CVE-2024-21094 CVE-2024-28834 1.3.10. Logging 5.6.18 This release includes OpenShift Logging Bug Fix 5.6.18 1.3.10.1. Enhancements Before this update, Loki Operator set up Loki to use path-based style access for the Amazon Simple Storage Service (S3), which has been deprecated. With this update, the Loki Operator defaults to virtual-host style without users needing to change their configuration. 
( LOG-5404 ) Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint used in the storage secret. With this update, the validation process ensures the S3 endpoint is a valid S3 URL, and the LokiStack status updates to indicate any invalid URLs. ( LOG-5396 ) 1.3.10.2. Bug fixes Before this update, the Elasticsearch Operator ServiceMonitor in the openshift-operators-redhat namespace used static token and certificate authority (CA) files for authentication, causing errors in the Prometheus Operator in the User Workload Monitoring specification on the ServiceMonitor configuration. With this update, the Elasticsearch Operator ServiceMonitor in the openshift-operators-redhat namespace now references a service account token secret by a LocalReference object. This approach allows the User Workload Monitoring specifications in the Prometheus Operator to handle the Elasticsearch Operator ServiceMonitor successfully. This enables Prometheus to scrape the Elasticsearch Operator metrics. ( LOG-5244 ) Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint URL format used in the storage secret. With this update, the S3 endpoint URL goes through a validation step that is reflected in the status of the LokiStack . ( LOG-5400 ) 1.3.10.3. CVEs CVE-2021-33631 CVE-2021-43618 CVE-2022-38096 CVE-2022-48624 CVE-2023-6546 CVE-2023-6931 CVE-2023-28322 CVE-2023-38546 CVE-2023-46218 CVE-2023-51042 CVE-2024-0565 CVE-2024-1086 1.3.11. Logging 5.6.17 This release includes OpenShift Logging Bug Fix 5.6.17 . 1.3.11.1. Bug fixes Before this update, the Red Hat build pipeline did not use the existing build details in Loki builds and omitted information such as revision, branch, and version. With this update, the Red Hat build pipeline now adds these details to the Loki builds, fixing the issue. ( LOG-5203 ) Before this update, the configuration of the ServiceMonitor by the Loki Operator could match many Kubernetes services, which led to the Loki Operator's metrics being collected multiple times. With this update, the ServiceMonitor setup now only matches the dedicated metrics service. ( LOG-5252 ) Before this update, the build pipeline did not include linker flags for the build date, causing Loki builds to show empty strings for buildDate and goVersion . With this update, adding the missing linker flags in the build pipeline fixes the issue. ( LOG-5276 ) Before this update, the Loki Operator ServiceMonitor in the openshift-operators-redhat namespace used static token and CA files for authentication, causing errors in the Prometheus Operator in the User Workload Monitoring spec on the ServiceMonitor configuration. With this update, the Loki Operator ServiceMonitor in the openshift-operators-redhat namespace now references a service account token secret by a LocalReference object. This approach allows the User Workload Monitoring spec in the Prometheus Operator to handle the Loki Operator ServiceMonitor successfully, enabling Prometheus to scrape the Loki Operator metrics. ( LOG-5242 ) 1.3.11.2. CVEs CVE-2021-35937 CVE-2021-35938 CVE-2021-35939 CVE-2024-24786 1.3.12. Logging 5.6.16 This release includes Logging Bug Fix 5.6.16 . 1.3.12.1. Bug fixes Before this update, when configured to read a custom S3 Certificate Authority, the Loki Operator would not automatically update the configuration when the name of the ConfigMap or the contents changed.
With this update, the Loki Operator is watching for changes to the ConfigMap and automatically updates the generated configuration. ( LOG-4967 ) 1.3.12.2. CVEs 1.3.13. Logging 5.6.15 This release includes OpenShift Logging Bug Fix Release 5.6.15 . 1.3.13.1. Bug fixes Before this update, the LokiStack ruler pods would not format the IPv6 pod IP in HTTP URLs used for cross pod communication, causing querying rules and alerts through the Prometheus-compatible API to fail. With this update, the LokiStack ruler pods encapsulate the IPv6 pod IP in square brackets, resolving the issue. ( LOG-4892 ) 1.3.13.2. CVEs CVE-2021-3468 CVE-2023-3446 CVE-2023-3817 CVE-2023-5678 CVE-2023-38469 CVE-2023-38470 CVE-2023-38471 CVE-2023-38472 CVE-2023-38473 1.3.14. Logging 5.6.14 This release includes OpenShift Logging Bug Fix Release 5.6.14 . 1.3.14.1. Bug fixes Before this update, during the process of creating index patterns, the default alias was missing from the initial index in each log output. As a result, Kibana users were unable to create index patterns by using OpenShift Elasticsearch Operator. This update adds the missing aliases to OpenShift Elasticsearch Operator, resolving the issue. Kibana users can now create index patterns that include the {app,infra,audit}-000001 indexes. ( LOG-4807 ) Before this update, the Loki Operator did not mount a custom CA bundle to the ruler pods. As a result, during the process to evaluate alerting or recording rules, object storage access failed. With this update, the Loki Operator mounts the custom CA bundle to all ruler pods. The ruler pods can download logs from object storage to evaluate alerting or recording rules. ( LOG-4838 ) 1.3.14.2. CVEs CVE-2007-4559 CVE-2021-43975 CVE-2022-3594 CVE-2022-3640 CVE-2022-4744 CVE-2022-28388 CVE-2022-38457 CVE-2022-40133 CVE-2022-40982 CVE-2022-41862 CVE-2022-42895 CVE-2022-45869 CVE-2022-45887 CVE-2022-48337 CVE-2022-48339 CVE-2023-0458 CVE-2023-0590 CVE-2023-0597 CVE-2023-1073 CVE-2023-1074 CVE-2023-1075 CVE-2023-1079 CVE-2023-1118 CVE-2023-1206 CVE-2023-1252 CVE-2023-1382 CVE-2023-1855 CVE-2023-1981 CVE-2023-1989 CVE-2023-1998 CVE-2023-2513 CVE-2023-3138 CVE-2023-3141 CVE-2023-3161 CVE-2023-3212 CVE-2023-3268 CVE-2023-3609 CVE-2023-3611 CVE-2023-3772 CVE-2023-4016 CVE-2023-4128 CVE-2023-4132 CVE-2023-4155 CVE-2023-4206 CVE-2023-4207 CVE-2023-4208 CVE-2023-4641 CVE-2023-4732 CVE-2023-22745 CVE-2023-23455 CVE-2023-26545 CVE-2023-28328 CVE-2023-28772 CVE-2023-30456 CVE-2023-31084 CVE-2023-31436 CVE-2023-31486 CVE-2023-32324 CVE-2023-33203 CVE-2023-33951 CVE-2023-33952 CVE-2023-34241 CVE-2023-35823 CVE-2023-35824 CVE-2023-35825 1.3.15. Logging 5.6.13 This release includes OpenShift Logging Bug Fix Release 5.6.13 . 1.3.15.1. Bug fixes None. 1.3.15.2. CVEs CVE-2023-40217 CVE-2023-44487 1.3.16. Logging 5.6.12 This release includes OpenShift Logging Bug Fix Release 5.6.12 . 1.3.16.1. Bug fixes Before this update, deploying a LokiStack on IPv6-only or dual-stack OpenShift Container Platform clusters caused the LokiStack memberlist registration to fail. As a result, the distributor pods went into a crash loop. With this update, an administrator can enable IPv6 by setting the lokistack.spec.hashRing.memberlist.enableIPv6: value to true , which resolves the issue. Currently, the log alert is not available on an IPv6-enabled cluster. 
( LOG-4570 ) Before this update, there was an error in the query used for the FluentD Buffer Availability graph in the metrics dashboard created by the Cluster Logging Operator as it showed the minimum buffer usage. With this update, the graph shows the maximum buffer usage and is now renamed to FluentD Buffer Usage . ( LOG-4579 ) Before this update, the unused metrics in the Event Router caused the container to fail due to excessive memory usage. With this update, there is reduction in the memory usage of the Event Router by removing the unused metrics. ( LOG-4687 ) 1.3.16.2. CVEs CVE-2023-0800 CVE-2023-0801 CVE-2023-0802 CVE-2023-0803 CVE-2023-0804 CVE-2023-2002 CVE-2023-3090 CVE-2023-3390 CVE-2023-3776 CVE-2023-4004 CVE-2023-4527 CVE-2023-4806 CVE-2023-4813 CVE-2023-4863 CVE-2023-4911 CVE-2023-5129 CVE-2023-20593 CVE-2023-29491 CVE-2023-30630 CVE-2023-35001 CVE-2023-35788 1.3.17. Logging 5.6.11 This release includes OpenShift Logging Bug Fix Release 5.6.11 . 1.3.17.1. Bug fixes Before this update, the LokiStack gateway cached authorized requests very broadly. As a result, this caused wrong authorization results. With this update, LokiStack gateway caches on a more fine-grained basis which resolves this issue. ( LOG-4435 ) 1.3.17.2. CVEs CVE-2023-3899 CVE-2023-32360 CVE-2023-34969 1.3.18. Logging 5.6.9 This release includes OpenShift Logging Bug Fix Release 5.6.9 . 1.3.18.1. Bug fixes Before this update, when multiple roles were used to authenticate using STS with AWS Cloudwatch forwarding, a recent update caused the credentials to be non-unique. With this update, multiple combinations of STS roles and static credentials can once again be used to authenticate with AWS Cloudwatch. ( LOG-4084 ) Before this update, the Vector collector occasionally panicked with the following error message in its log: thread 'vector-worker' panicked at 'all branches are disabled and there is no else branch', src/kubernetes/reflector.rs:26:9 . With this update, the error has been resolved. ( LOG-4276 ) Before this update, Loki filtered label values for active streams but did not remove duplicates, making Grafana's Label Browser unusable. With this update, Loki filters out duplicate label values for active streams, resolving the issue. ( LOG-4390 ) 1.3.18.2. CVEs CVE-2020-24736 CVE-2022-48281 CVE-2023-1667 CVE-2023-2283 CVE-2023-24329 CVE-2023-26604 CVE-2023-28466 CVE-2023-32233 1.3.19. Logging 5.6.8 This release includes OpenShift Logging Bug Fix Release 5.6.8 . 1.3.19.1. Bug fixes Before this update, the vector collector terminated unexpectedly when input match label values contained a / character within the ClusterLogForwarder . This update resolves the issue by quoting the match label, enabling the collector to start and collect logs. ( LOG-4091 ) Before this update, when viewing logs within the OpenShift Container Platform web console, clicking the more data available option loaded more log entries only the first time it was clicked. With this update, more entries are loaded with each click. ( OU-187 ) Before this update, when viewing logs within the OpenShift Container Platform web console, clicking the streaming option would only display the streaming logs message without showing the actual logs. With this update, both the message and the log stream are displayed correctly. ( OU-189 ) Before this update, the Loki Operator reset errors in a way that made identifying configuration problems difficult to troubleshoot. With this update, errors persist until the configuration error is resolved. 
( LOG-4158 ) Before this update, clusters with more than 8,000 namespaces caused Elasticsearch to reject queries because the list of namespaces was larger than the http.max_header_size setting. With this update, the default value for header size has been increased, resolving the issue. ( LOG-4278 ) 1.3.19.2. CVEs CVE-2020-24736 CVE-2022-48281 CVE-2023-1667 CVE-2023-2283 CVE-2023-24329 CVE-2023-26604 CVE-2023-28466 1.3.20. Logging 5.6.5 This release includes OpenShift Logging Bug Fix Release 5.6.5 . 1.3.20.1. Bug fixes Before this update, the template definitions prevented Elasticsearch from indexing some labels and namespace_labels, causing issues with data ingestion. With this update, the fix replaces dots and slashes in labels to ensure proper ingestion, effectively resolving the issue. ( LOG-3419 ) Before this update, if the Logs page of the OpenShift Web Console failed to connect to the LokiStack, a generic error message was displayed, providing no additional context or troubleshooting suggestions. With this update, the error message has been enhanced to include more specific details and recommendations for troubleshooting. ( LOG-3750 ) Before this update, time range formats were not validated, leading to errors selecting a custom date range. With this update, time formats are now validated, enabling users to select a valid range. If an invalid time range format is selected, an error message is displayed to the user. ( LOG-3583 ) Before this update, when searching logs in Loki, even if the length of an expression did not exceed 5120 characters, the query would fail in many cases. With this update, query authorization label matchers have been optimized, resolving the issue. ( LOG-3480 ) Before this update, the Loki Operator failed to produce a memberlist configuration that was sufficient for locating all the components when using a memberlist for private IPs. With this update, the fix ensures that the generated configuration includes the advertised port, allowing for successful lookup of all components. ( LOG-4008 ) 1.3.20.2. CVEs CVE-2022-4269 CVE-2022-4378 CVE-2023-0266 CVE-2023-0361 CVE-2023-0386 CVE-2023-27539 CVE-2023-28120 1.3.21. Logging 5.6.4 This release includes OpenShift Logging Bug Fix Release 5.6.4 . 1.3.21.1. Bug fixes Before this update, when LokiStack was deployed as the log store, the logs generated by Loki pods were collected and sent to LokiStack. With this update, the logs generated by Loki are excluded from collection and will not be stored. ( LOG-3280 ) Before this update, when the query editor on the Logs page of the OpenShift Web Console was empty, the drop-down menus did not populate. With this update, if an empty query is attempted, an error message is displayed and the drop-down menus now populate as expected. ( LOG-3454 ) Before this update, when the tls.insecureSkipVerify option was set to true , the Cluster Logging Operator would generate incorrect configuration. As a result, the operator would fail to send data to Elasticsearch when attempting to skip certificate validation. With this update, the Cluster Logging Operator generates the correct TLS configuration even when tls.insecureSkipVerify is enabled. As a result, data can be sent successfully to Elasticsearch even when attempting to skip certificate validation. ( LOG-3475 ) Before this update, when structured parsing was enabled and messages were forwarded to multiple destinations, they were not deep copied. 
This resulted in some of the received logs including the structured message, while others did not. With this update, the configuration generation has been modified to deep copy messages before JSON parsing. As a result, all received messages now have structured messages included, even when they are forwarded to multiple destinations. ( LOG-3640 ) Before this update, if the collection field contained {} it could result in the Operator crashing. With this update, the Operator will ignore this value, allowing the operator to continue running smoothly without interruption. ( LOG-3733 ) Before this update, the nodeSelector attribute for the Gateway component of LokiStack did not have any effect. With this update, the nodeSelector attribute functions as expected. ( LOG-3783 ) Before this update, the static LokiStack memberlist configuration relied solely on private IP networks. As a result, when the OpenShift Container Platform cluster pod network was configured with a public IP range, the LokiStack pods would crashloop. With this update, the LokiStack administrator now has the option to use the pod network for the memberlist configuration. This resolves the issue and prevents the LokiStack pods from entering a crashloop state when the OpenShift Container Platform cluster pod network is configured with a public IP range. ( LOG-3814 ) Before this update, if the tls.insecureSkipVerify field was set to true , the Cluster Logging Operator would generate an incorrect configuration. As a result, the Operator would fail to send data to Elasticsearch when attempting to skip certificate validation. With this update, the Operator generates the correct TLS configuration even when tls.insecureSkipVerify is enabled. As a result, data can be sent successfully to Elasticsearch even when attempting to skip certificate validation. ( LOG-3838 ) Before this update, if the Cluster Logging Operator (CLO) was installed without the Elasticsearch Operator, the CLO pod would continuously display an error message related to the deletion of Elasticsearch. With this update, the CLO now performs additional checks before displaying any error messages. As a result, error messages related to Elasticsearch deletion are no longer displayed in the absence of the Elasticsearch Operator.( LOG-3763 ) 1.3.21.2. CVEs CVE-2022-4304 CVE-2022-4450 CVE-2023-0215 CVE-2023-0286 CVE-2023-0767 CVE-2023-23916 1.3.22. Logging 5.6.3 This release includes OpenShift Logging Bug Fix Release 5.6.3 . 1.3.22.1. Bug fixes Before this update, the operator stored gateway tenant secret information in a config map. With this update, the operator stores this information in a secret. ( LOG-3717 ) Before this update, the Fluentd collector did not capture OAuth login events stored in /var/log/auth-server/audit.log . With this update, Fluentd captures these OAuth login events, resolving the issue. ( LOG-3729 ) 1.3.22.2. CVEs CVE-2020-10735 CVE-2021-28861 CVE-2022-2873 CVE-2022-4415 CVE-2022-40897 CVE-2022-41222 CVE-2022-43945 CVE-2022-45061 CVE-2022-48303 1.3.23. Logging 5.6.2 This release includes OpenShift Logging Bug Fix Release 5.6.2 . 1.3.23.1. Bug fixes Before this update, the collector did not set level fields correctly based on priority for systemd logs. With this update, level fields are set correctly. ( LOG-3429 ) Before this update, the Operator incorrectly generated incompatibility warnings on OpenShift Container Platform 4.12 or later. 
With this update, the Operator max OpenShift Container Platform version value has been corrected, resolving the issue. ( LOG-3584 ) Before this update, creating a ClusterLogForwarder custom resource (CR) with an output value of default did not generate any errors. With this update, an error warning that this value is invalid generates appropriately. ( LOG-3437 ) Before this update, when the ClusterLogForwarder custom resource (CR) had multiple pipelines configured with one output set as default , the collector pods restarted. With this update, the logic for output validation has been corrected, resolving the issue. ( LOG-3559 ) Before this update, collector pods restarted after being created. With this update, the deployed collector does not restart on its own. ( LOG-3608 ) Before this update, patch releases removed versions of the Operators from the catalog. This made installing the old versions impossible. This update changes bundle configurations so that releases of the same minor version stay in the catalog. ( LOG-3635 ) 1.3.23.2. CVEs CVE-2022-23521 CVE-2022-40303 CVE-2022-40304 CVE-2022-41903 CVE-2022-47629 CVE-2023-21835 CVE-2023-21843 1.3.24. Logging 5.6.1 This release includes OpenShift Logging Bug Fix Release 5.6.1 . 1.3.24.1. Bug fixes Before this update, the compactor would report TLS certificate errors from communications with the querier when retention was active. With this update, the compactor and querier no longer communicate erroneously over HTTP. ( LOG-3494 ) Before this update, the Loki Operator would not retry setting the status of the LokiStack CR, which caused stale status information. With this update, the Operator retries status information updates on conflict. ( LOG-3496 ) Before this update, the Loki Operator Webhook server caused TLS errors when the kube-apiserver-operator Operator checked the webhook validity. With this update, the Loki Operator Webhook PKI is managed by the Operator Lifecycle Manager (OLM), resolving the issue. ( LOG-3510 ) Before this update, the LokiStack Gateway Labels Enforcer generated parsing errors for valid LogQL queries when using combined label filters with boolean expressions. With this update, the LokiStack LogQL implementation supports label filters with boolean expression and resolves the issue. ( LOG-3441 ), ( LOG-3397 ) Before this update, records written to Elasticsearch would fail if multiple label keys had the same prefix and some keys included dots. With this update, underscores replace dots in label keys, resolving the issue. ( LOG-3463 ) Before this update, the Red Hat OpenShift Logging Operator was not available for OpenShift Container Platform 4.10 clusters because of an incompatibility between OpenShift Container Platform console and the logging-view-plugin. With this update, the plugin is properly integrated with the OpenShift Container Platform 4.10 admin console. ( LOG-3447 ) Before this update the reconciliation of the ClusterLogForwarder custom resource would incorrectly report a degraded status of pipelines that reference the default logstore. With this update, the pipeline validates properly.( LOG-3477 ) 1.3.24.2. CVEs CVE-2021-46848 CVE-2022-3821 CVE-2022-35737 CVE-2022-42010 CVE-2022-42011 CVE-2022-42012 CVE-2022-42898 CVE-2022-43680 CVE-2021-35065 CVE-2022-46175 1.3.25. Logging 5.6.0 This release includes OpenShift Logging Release 5.6 . 1.3.25.1. Deprecation notice In logging version 5.6, Fluentd is deprecated and is planned to be removed in a future release. 
Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to Fluentd, you can use Vector instead. 1.3.25.2. Enhancements With this update, Logging is compliant with OpenShift Container Platform cluster-wide cryptographic policies. ( LOG-895 ) With this update, you can declare per-tenant, per-stream, and global retention policies through the LokiStack custom resource, ordered by priority. ( LOG-2695 ) With this update, Splunk is an available output option for log forwarding. ( LOG-2913 ) With this update, Vector replaces Fluentd as the default Collector. ( LOG-2222 ) With this update, the Developer role can access the per-project workload logs they are assigned to within the Log Console Plugin on clusters running OpenShift Container Platform 4.11 and higher. ( LOG-3388 ) With this update, logs from any source contain a field openshift.cluster_id , the unique identifier of the cluster in which the Operator is deployed. You can view the clusterID value by using the following command: $ oc get clusterversion/version -o jsonpath='{.spec.clusterID}{"\n"}' ( LOG-2715 ) 1.3.25.3. Known Issues Before this update, Elasticsearch would reject logs if multiple label keys had the same prefix and some keys included the . character. This fixes the limitation of Elasticsearch by replacing . in the label keys with _ . As a workaround for this issue, remove the labels that cause errors, or add a namespace to the label. ( LOG-3463 ) 1.3.25.4. Bug fixes Before this update, if you deleted the Kibana Custom Resource, the OpenShift Container Platform web console continued displaying a link to Kibana. With this update, removing the Kibana Custom Resource also removes that link. ( LOG-2993 ) Before this update, a user was not able to view the application logs of namespaces they have access to. With this update, the Loki Operator automatically creates a cluster role and cluster role binding allowing users to read application logs. ( LOG-3072 ) Before this update, the Operator removed any custom outputs defined in the ClusterLogForwarder custom resource when using LokiStack as the default log storage. With this update, the Operator merges custom outputs with the default outputs when processing the ClusterLogForwarder custom resource. ( LOG-3090 ) Before this update, the CA key was used as the volume name for mounting the CA into Loki, causing error states when the CA Key included non-conforming characters, such as dots. With this update, the volume name is standardized to an internal string, which resolves the issue. ( LOG-3331 ) Before this update, a default value set within the LokiStack Custom Resource Definition caused an inability to create a LokiStack instance without a ReplicationFactor of 1 . With this update, the operator sets the actual value for the size used. ( LOG-3296 ) Before this update, Vector parsed the message field when JSON parsing was enabled without also defining structuredTypeKey or structuredTypeName values. With this update, a value is required for either structuredTypeKey or structuredTypeName when writing structured logs to Elasticsearch. ( LOG-3195 ) Before this update, the secret creation component of the Elasticsearch Operator modified internal secrets constantly. With this update, the existing secret is properly handled.
( LOG-3161 ) Before this update, the Operator could enter a loop of removing and recreating the collector daemonset while the Elasticsearch or Kibana deployments changed their status. With this update, a fix in the status handling of the Operator resolves the issue. ( LOG-3157 ) Before this update, Kibana had a fixed 24h OAuth cookie expiration time, which resulted in 401 errors in Kibana whenever the accessTokenInactivityTimeout field was set to a value lower than 24h . With this update, Kibana's OAuth cookie expiration time synchronizes to the accessTokenInactivityTimeout , with a default value of 24h . ( LOG-3129 ) Before this update, the Operators' general pattern for reconciling resources was to try to create before attempting to get or update, which would lead to constant HTTP 409 responses after creation. With this update, Operators first attempt to retrieve an object and only create or update it if it is either missing or not as specified. ( LOG-2919 ) Before this update, the .level and .structure.level fields in Fluentd could contain different values. With this update, the values are the same for each field. ( LOG-2819 ) Before this update, the Operator did not wait for the population of the trusted CA bundle and deployed the collector a second time once the bundle updated. With this update, the Operator waits briefly to see if the bundle has been populated before it continues the collector deployment. ( LOG-2789 ) Before this update, logging telemetry info appeared twice when reviewing metrics. With this update, logging telemetry info displays as expected. ( LOG-2315 ) Before this update, Fluentd pod logs contained a warning message after enabling the JSON parsing addition. With this update, that warning message does not appear. ( LOG-1806 ) Before this update, the must-gather script did not complete because oc needs a folder with write permission to build its cache. With this update, oc has write permissions to a folder, and the must-gather script completes successfully. ( LOG-3446 ) Before this update, the log collector SCC could be superseded by other SCCs on the cluster, rendering the collector unusable. This update sets the priority of the log collector SCC so that it takes precedence over the others. ( LOG-3235 ) Before this update, Vector was missing the field sequence , which was added to Fluentd as a way to deal with a lack of actual nanoseconds precision. With this update, the field openshift.sequence has been added to the event logs. ( LOG-3106 ) 1.3.25.5. CVEs CVE-2020-36518 CVE-2021-46848 CVE-2022-2879 CVE-2022-2880 CVE-2022-27664 CVE-2022-32190 CVE-2022-35737 CVE-2022-37601 CVE-2022-41715 CVE-2022-42003 CVE-2022-42004 CVE-2022-42010 CVE-2022-42011 CVE-2022-42012 CVE-2022-42898 CVE-2022-43680 1.4. Logging 5.5 Note Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. 1.4.1. Logging 5.5.18 This release includes OpenShift Logging Bug Fix Release 5.5.18 . 1.4.1.1. Bug fixes None. 1.4.1.2. CVEs CVE-2023-40217 CVE-2023-44487 1.4.2. Logging 5.5.17 This release includes OpenShift Logging Bug Fix Release 5.5.17 . 1.4.2.1. Bug fixes Before this update, the unused metrics in the Event Router caused the container to fail due to excessive memory usage. With this update, there is a reduction in the memory usage of the Event Router by removing the unused metrics. ( LOG-4688 ) 1.4.2.2.
CVEs CVE-2023-0800 CVE-2023-0801 CVE-2023-0802 CVE-2023-0803 CVE-2023-0804 CVE-2023-2002 CVE-2023-3090 CVE-2023-3341 CVE-2023-3390 CVE-2023-3776 CVE-2023-4004 CVE-2023-4527 CVE-2023-4806 CVE-2023-4813 CVE-2023-4863 CVE-2023-4911 CVE-2023-5129 CVE-2023-20593 CVE-2023-29491 CVE-2023-30630 CVE-2023-35001 CVE-2023-35788 1.4.3. Logging 5.5.16 This release includes OpenShift Logging Bug Fix Release 5.5.16 . 1.4.3.1. Bug fixes Before this update, the LokiStack gateway cached authorized requests very broadly. As a result, this caused wrong authorization results. With this update, LokiStack gateway caches on a more fine-grained basis which resolves this issue. ( LOG-4434 ) 1.4.3.2. CVEs CVE-2023-3899 CVE-2023-32360 CVE-2023-34969 1.4.4. Logging 5.5.14 This release includes OpenShift Logging Bug Fix Release 5.5.14 . 1.4.4.1. Bug fixes Before this update, the Vector collector occasionally panicked with the following error message in its log: thread 'vector-worker' panicked at 'all branches are disabled and there is no else branch', src/kubernetes/reflector.rs:26:9 . With this update, the error does not show in the Vector collector. ( LOG-4279 ) 1.4.4.2. CVEs CVE-2023-2828 1.4.5. Logging 5.5.13 This release includes OpenShift Logging Bug Fix Release 5.5.13 . 1.4.5.1. Bug fixes None. 1.4.5.2. CVEs CVE-2023-1999 CVE-2020-24736 CVE-2022-48281 CVE-2023-1667 CVE-2023-2283 CVE-2023-24329 CVE-2023-26604 CVE-2023-28466 1.4.6. Logging 5.5.12 This release includes OpenShift Logging Bug Fix Release 5.5.12 . 1.4.6.1. Bug fixes None. 1.4.6.2. CVEs CVE-2021-26341 CVE-2021-33655 CVE-2021-33656 CVE-2022-1462 CVE-2022-1679 CVE-2022-1789 CVE-2022-2196 CVE-2022-2663 CVE-2022-3028 CVE-2022-3239 CVE-2022-3522 CVE-2022-3524 CVE-2022-3564 CVE-2022-3566 CVE-2022-3567 CVE-2022-3619 CVE-2022-3623 CVE-2022-3625 CVE-2022-3627 CVE-2022-3628 CVE-2022-3707 CVE-2022-3970 CVE-2022-4129 CVE-2022-20141 CVE-2022-25147 CVE-2022-25265 CVE-2022-30594 CVE-2022-35252 CVE-2022-36227 CVE-2022-39188 CVE-2022-39189 CVE-2022-41218 CVE-2022-41674 CVE-2022-42703 CVE-2022-42720 CVE-2022-42721 CVE-2022-42722 CVE-2022-43552 CVE-2022-43750 CVE-2022-47929 CVE-2023-0394 CVE-2023-0461 CVE-2023-1195 CVE-2023-1582 CVE-2023-2491 CVE-2023-22490 CVE-2023-23454 CVE-2023-23946 CVE-2023-25652 CVE-2023-25815 CVE-2023-27535 CVE-2023-29007 1.4.7. Logging 5.5.11 This release includes OpenShift Logging Bug Fix Release 5.5.11 . 1.4.7.1. Bug fixes Before this update, a time range could not be selected in the OpenShift Container Platform web console by clicking and dragging over the logs histogram. With this update, clicking and dragging can be used to successfully select a time range. ( LOG-4102 ) Before this update, clicking on the Show Resources link in the OpenShift Container Platform web console did not produce any effect. With this update, the issue is resolved by fixing the functionality of the Show Resources link to toggle the display of resources for each log entry. ( LOG-4117 ) 1.4.7.2. 
CVEs CVE-2021-26341 CVE-2021-33655 CVE-2021-33656 CVE-2022-1462 CVE-2022-1679 CVE-2022-1789 CVE-2022-2196 CVE-2022-2663 CVE-2022-2795 CVE-2022-3028 CVE-2022-3239 CVE-2022-3522 CVE-2022-3524 CVE-2022-3564 CVE-2022-3566 CVE-2022-3567 CVE-2022-3619 CVE-2022-3623 CVE-2022-3625 CVE-2022-3627 CVE-2022-3628 CVE-2022-3707 CVE-2022-3970 CVE-2022-4129 CVE-2022-20141 CVE-2022-24765 CVE-2022-25265 CVE-2022-29187 CVE-2022-30594 CVE-2022-36227 CVE-2022-39188 CVE-2022-39189 CVE-2022-39253 CVE-2022-39260 CVE-2022-41218 CVE-2022-41674 CVE-2022-42703 CVE-2022-42720 CVE-2022-42721 CVE-2022-42722 CVE-2022-43750 CVE-2022-47929 CVE-2023-0394 CVE-2023-0461 CVE-2023-1195 CVE-2023-1582 CVE-2023-2491 CVE-2023-23454 CVE-2023-27535 1.4.8. Logging 5.5.10 This release includes OpenShift Logging Bug Fix Release 5.5.10 . 1.4.8.1. Bug fixes Before this update, the logging view plugin of the OpenShift Web Console showed only an error text when the LokiStack was not reachable. After this update the plugin shows a proper error message with details on how to fix the unreachable LokiStack. ( LOG-2874 ) 1.4.8.2. CVEs CVE-2022-4304 CVE-2022-4450 CVE-2023-0215 CVE-2023-0286 CVE-2023-0361 CVE-2023-23916 1.4.9. Logging 5.5.9 This release includes OpenShift Logging Bug Fix Release 5.5.9 . 1.4.9.1. Bug fixes Before this update, a problem with the Fluentd collector caused it to not capture OAuth login events stored in /var/log/auth-server/audit.log . This led to incomplete collection of login events from the OAuth service. With this update, the Fluentd collector now resolves this issue by capturing all login events from the OAuth service, including those stored in /var/log/auth-server/audit.log , as expected.( LOG-3730 ) Before this update, when structured parsing was enabled and messages were forwarded to multiple destinations, they were not deep copied. This resulted in some of the received logs including the structured message, while others did not. With this update, the configuration generation has been modified to deep copy messages before JSON parsing. As a result, all received logs now have structured messages included, even when they are forwarded to multiple destinations.( LOG-3767 ) 1.4.9.2. CVEs CVE-2022-4304 CVE-2022-4450 CVE-2022-41717 CVE-2023-0215 CVE-2023-0286 CVE-2023-0767 CVE-2023-23916 1.4.10. Logging 5.5.8 This release includes OpenShift Logging Bug Fix Release 5.5.8 . 1.4.10.1. Bug fixes Before this update, the priority field was missing from systemd logs due to an error in how the collector set level fields. With this update, these fields are set correctly, resolving the issue. ( LOG-3630 ) 1.4.10.2. CVEs CVE-2020-10735 CVE-2021-28861 CVE-2022-2873 CVE-2022-4415 CVE-2022-24999 CVE-2022-40897 CVE-2022-41222 CVE-2022-41717 CVE-2022-43945 CVE-2022-45061 CVE-2022-48303 1.4.11. Logging 5.5.7 This release includes OpenShift Logging Bug Fix Release 5.5.7 . 1.4.11.1. Bug fixes Before this update, the LokiStack Gateway Labels Enforcer generated parsing errors for valid LogQL queries when using combined label filters with boolean expressions. With this update, the LokiStack LogQL implementation supports label filters with boolean expression and resolves the issue. ( LOG-3534 ) Before this update, the ClusterLogForwarder custom resource (CR) did not pass TLS credentials for syslog output to Fluentd, resulting in errors during forwarding. With this update, credentials pass correctly to Fluentd, resolving the issue. ( LOG-3533 ) 1.4.11.2. 
CVEs CVE-2021-46848 CVE-2022-3821 CVE-2022-35737 CVE-2022-42010 CVE-2022-42011 CVE-2022-42012 CVE-2022-42898 CVE-2022-43680 1.4.12. Logging 5.5.6 This release includes OpenShift Logging Bug Fix Release 5.5.6 . 1.4.12.1. Bug fixes Before this update, the Pod Security admission controller added the label podSecurityLabelSync = true to the openshift-logging namespace. This resulted in our specified security labels being overwritten, and as a result Collector pods would not start. With this update, the label podSecurityLabelSync = false preserves security labels. Collector pods deploy as expected. ( LOG-3340 ) Before this update, the Operator installed the console view plugin, even when it was not enabled on the cluster. This caused the Operator to crash. With this update, if an account for a cluster does not have the console view enabled, the Operator functions normally and does not install the console view. ( LOG-3407 ) Before this update, a prior fix to support a regression where the status of the Elasticsearch deployment was not being updated caused the Operator to crash unless the Red Hat Elasticsearch Operator was deployed. With this update, that fix has been reverted so the Operator is now stable but re-introduces the issue related to the reported status. ( LOG-3428 ) Before this update, the Loki Operator only deployed one replica of the LokiStack gateway regardless of the chosen stack size. With this update, the number of replicas is correctly configured according to the selected size. ( LOG-3478 ) Before this update, records written to Elasticsearch would fail if multiple label keys had the same prefix and some keys included dots. With this update, underscores replace dots in label keys, resolving the issue. ( LOG-3341 ) Before this update, the logging view plugin contained an incompatible feature for certain versions of OpenShift Container Platform. With this update, the correct release stream of the plugin resolves the issue. ( LOG-3467 ) Before this update, the reconciliation of the ClusterLogForwarder custom resource would incorrectly report a degraded status of one or more pipelines causing the collector pods to restart every 8-10 seconds. With this update, reconciliation of the ClusterLogForwarder custom resource processes correctly, resolving the issue. ( LOG-3469 ) Before this change the spec for the outputDefaults field of the ClusterLogForwarder custom resource would apply the settings to every declared Elasticsearch output type. This change corrects the behavior to match the enhancement specification where the setting specifically applies to the default managed Elasticsearch store. ( LOG-3342 ) Before this update, the OpenShift CLI (oc) must-gather script did not complete because the OpenShift CLI (oc) needs a folder with write permission to build its cache. With this update, the OpenShift CLI (oc) has write permissions to a folder, and the must-gather script completes successfully. ( LOG-3472 ) Before this update, the Loki Operator webhook server caused TLS errors. With this update, the Loki Operator webhook PKI is managed by the Operator Lifecycle Manager's dynamic webhook management resolving the issue. ( LOG-3511 ) 1.4.12.2. CVEs CVE-2021-46848 CVE-2022-2056 CVE-2022-2057 CVE-2022-2058 CVE-2022-2519 CVE-2022-2520 CVE-2022-2521 CVE-2022-2867 CVE-2022-2868 CVE-2022-2869 CVE-2022-2953 CVE-2022-2964 CVE-2022-4139 CVE-2022-35737 CVE-2022-42010 CVE-2022-42011 CVE-2022-42012 CVE-2022-42898 CVE-2022-43680 1.4.13. 
Logging 5.5.5 This release includes OpenShift Logging Bug Fix Release 5.5.5 . 1.4.13.1. Bug fixes Before this update, Kibana had a fixed 24h OAuth cookie expiration time, which resulted in 401 errors in Kibana whenever the accessTokenInactivityTimeout field was set to a value lower than 24h . With this update, Kibana's OAuth cookie expiration time synchronizes to the accessTokenInactivityTimeout , with a default value of 24h . ( LOG-3305 ) Before this update, Vector parsed the message field when JSON parsing was enabled without also defining structuredTypeKey or structuredTypeName values. With this update, a value is required for either structuredTypeKey or structuredTypeName when writing structured logs to Elasticsearch. ( LOG-3284 ) Before this update, the FluentdQueueLengthIncreasing alert could fail to fire when there was a cardinality issue with the set of labels returned from this alert expression. This update reduces labels to only include those required for the alert. ( LOG-3226 ) Before this update, Loki did not have support to reach an external storage in a disconnected cluster. With this update, proxy environment variables and proxy trusted CA bundles are included in the container image to support these connections. ( LOG-2860 ) Before this update, OpenShift Container Platform web console users could not choose the ConfigMap object that includes the CA certificate for Loki, causing pods to operate without the CA. With this update, web console users can select the config map, resolving the issue. ( LOG-3310 ) Before this update, the CA key was used as volume name for mounting the CA into Loki, causing error states when the CA Key included non-conforming characters (such as dots). With this update, the volume name is standardized to an internal string which resolves the issue. ( LOG-3332 ) 1.4.13.2. CVEs CVE-2016-3709 CVE-2020-35525 CVE-2020-35527 CVE-2020-36516 CVE-2020-36558 CVE-2021-3640 CVE-2021-30002 CVE-2022-0168 CVE-2022-0561 CVE-2022-0562 CVE-2022-0617 CVE-2022-0854 CVE-2022-0865 CVE-2022-0891 CVE-2022-0908 CVE-2022-0909 CVE-2022-0924 CVE-2022-1016 CVE-2022-1048 CVE-2022-1055 CVE-2022-1184 CVE-2022-1292 CVE-2022-1304 CVE-2022-1355 CVE-2022-1586 CVE-2022-1785 CVE-2022-1852 CVE-2022-1897 CVE-2022-1927 CVE-2022-2068 CVE-2022-2078 CVE-2022-2097 CVE-2022-2509 CVE-2022-2586 CVE-2022-2639 CVE-2022-2938 CVE-2022-3515 CVE-2022-20368 CVE-2022-21499 CVE-2022-21618 CVE-2022-21619 CVE-2022-21624 CVE-2022-21626 CVE-2022-21628 CVE-2022-22624 CVE-2022-22628 CVE-2022-22629 CVE-2022-22662 CVE-2022-22844 CVE-2022-23960 CVE-2022-24448 CVE-2022-25255 CVE-2022-26373 CVE-2022-26700 CVE-2022-26709 CVE-2022-26710 CVE-2022-26716 CVE-2022-26717 CVE-2022-26719 CVE-2022-27404 CVE-2022-27405 CVE-2022-27406 CVE-2022-27950 CVE-2022-28390 CVE-2022-28893 CVE-2022-29581 CVE-2022-30293 CVE-2022-34903 CVE-2022-36946 CVE-2022-37434 CVE-2022-39399 1.4.14. Logging 5.5.4 This release includes OpenShift Logging Bug Fix Release 5.5.4 . 1.4.14.1. Bug fixes Before this update, an error in the query parser of the logging view plugin caused parts of the logs query to disappear if the query contained curly brackets {} . This made the queries invalid, leading to errors being returned for valid queries. With this update, the parser correctly handles these queries. ( LOG-3042 ) Before this update, the Operator could enter a loop of removing and recreating the collector daemonset while the Elasticsearch or Kibana deployments changed their status. 
With this update, a fix in the status handling of the Operator resolves the issue. ( LOG-3049 ) Before this update, no alerts were implemented to support the collector implementation of Vector. This change adds Vector alerts and deploys separate alerts, depending upon the chosen collector implementation. ( LOG-3127 ) Before this update, the secret creation component of the Elasticsearch Operator modified internal secrets constantly. With this update, the existing secret is properly handled. ( LOG-3138 ) Before this update, a prior refactoring of the logging must-gather scripts removed the expected location for the artifacts. This update reverts that change to write artifacts to the /must-gather folder. ( LOG-3213 ) Before this update, on certain clusters, the Prometheus exporter would bind on IPv4 instead of IPv6. After this update, Fluentd detects the IP version and binds to 0.0.0.0 for IPv4 or [::] for IPv6. ( LOG-3162 ) 1.4.14.2. CVEs CVE-2020-35525 CVE-2020-35527 CVE-2022-0494 CVE-2022-1353 CVE-2022-2509 CVE-2022-2588 CVE-2022-3515 CVE-2022-21618 CVE-2022-21619 CVE-2022-21624 CVE-2022-21626 CVE-2022-21628 CVE-2022-23816 CVE-2022-23825 CVE-2022-29900 CVE-2022-29901 CVE-2022-32149 CVE-2022-37434 CVE-2022-40674 1.4.15. Logging 5.5.3 This release includes OpenShift Logging Bug Fix Release 5.5.3 . 1.4.15.1. Bug fixes Before this update, log entries that had structured messages included the original message field, which made the entry larger. This update removes the message field for structured logs to reduce the increased size. ( LOG-2759 ) Before this update, the collector configuration excluded logs from collector , default-log-store , and visualization pods, but was unable to exclude logs archived in a .gz file. With this update, archived logs stored as .gz files of collector , default-log-store , and visualization pods are also excluded. ( LOG-2844 ) Before this update, when requests to an unavailable pod were sent through the gateway, no alert would warn of the disruption. With this update, individual alerts will generate if the gateway has issues completing a write or read request. ( LOG-2884 ) Before this update, pod metadata could be altered by fluent plugins because the values passed through the pipeline by reference. This update ensures each log message receives a copy of the pod metadata so each message processes independently. ( LOG-3046 ) Before this update, selecting unknown severity in the OpenShift Console Logs view excluded logs with a level=unknown value. With this update, logs without level and with level=unknown values are visible when filtering by unknown severity. ( LOG-3062 ) Before this update, log records sent to Elasticsearch had an extra field named write-index that contained the name of the index to which the logs needed to be sent. This field is not a part of the data model. After this update, this field is no longer sent. ( LOG-3075 ) With the introduction of the new built-in Pod Security Admission Controller , Pods not configured in accordance with the enforced security standards defined globally or on the namespace level cannot run. With this update, the Operator and collectors allow privileged execution and run without security audit warnings or errors. ( LOG-3077 ) Before this update, the Operator removed any custom outputs defined in the ClusterLogForwarder custom resource when using LokiStack as the default log storage. With this update, the Operator merges custom outputs with the default outputs when processing the ClusterLogForwarder custom resource. 
( LOG-3095 ) 1.4.15.2. CVEs CVE-2015-20107 CVE-2022-0391 CVE-2022-2526 CVE-2022-21123 CVE-2022-21125 CVE-2022-21166 CVE-2022-29154 CVE-2022-32206 CVE-2022-32208 CVE-2022-34903 1.4.16. Logging 5.5.2 This release includes OpenShift Logging Bug Fix Release 5.5.2 . 1.4.16.1. Bug fixes Before this update, alerting rules for the Fluentd collector did not adhere to the OpenShift Container Platform monitoring style guidelines. This update modifies those alerts to include the namespace label, resolving the issue. ( LOG-1823 ) Before this update, the index management rollover script failed to generate a new index name whenever there was more than one hyphen character in the name of the index. With this update, index names generate correctly. ( LOG-2644 ) Before this update, the Kibana route was setting a caCertificate value without a certificate present. With this update, no caCertificate value is set. ( LOG-2661 ) Before this update, a change in the collector dependencies caused it to issue a warning message for unused parameters. With this update, removing unused configuration parameters resolves the issue. ( LOG-2859 ) Before this update, pods created for deployments that Loki Operator created were mistakenly scheduled on nodes with non-Linux operating systems, if such nodes were available in the cluster the Operator was running in. With this update, the Operator attaches an additional node-selector to the pod definitions which only allows scheduling the pods on Linux-based nodes. ( LOG-2895 ) Before this update, the OpenShift Console Logs view did not filter logs by severity due to a LogQL parser issue in the LokiStack gateway. With this update, a parser fix resolves the issue and the OpenShift Console Logs view can filter by severity. ( LOG-2908 ) Before this update, a refactoring of the Fluentd collector plugins removed the timestamp field for events. This update restores the timestamp field, sourced from the event's received time. ( LOG-2923 ) Before this update, absence of a level field in audit logs caused an error in vector logs. With this update, the addition of a level field in the audit log record resolves the issue. ( LOG-2961 ) Before this update, if you deleted the Kibana Custom Resource, the OpenShift Container Platform web console continued displaying a link to Kibana. With this update, removing the Kibana Custom Resource also removes that link. ( LOG-3053 ) Before this update, each rollover job created empty indices when the ClusterLogForwarder custom resource had JSON parsing defined. With this update, new indices are not empty. ( LOG-3063 ) Before this update, when the user deleted the LokiStack after an update to Loki Operator 5.5 resources originally created by Loki Operator 5.4 remained. With this update, the resources' owner-references point to the 5.5 LokiStack. ( LOG-2945 ) Before this update, a user was not able to view the application logs of namespaces they have access to. With this update, the Loki Operator automatically creates a cluster role and cluster role binding allowing users to read application logs. ( LOG-2918 ) Before this update, users with cluster-admin privileges were not able to properly view infrastructure and audit logs using the logging console. With this update, the authorization check has been extended to also recognize users in cluster-admin and dedicated-admin groups as admins. ( LOG-2970 ) 1.4.16.2. CVEs CVE-2015-20107 CVE-2022-0391 CVE-2022-21123 CVE-2022-21125 CVE-2022-21166 CVE-2022-29154 CVE-2022-32206 CVE-2022-32208 CVE-2022-34903 1.4.17. 
Logging 5.5.1 This release includes OpenShift Logging Bug Fix Release 5.5.1 . 1.4.17.1. Enhancements This enhancement adds an Aggregated Logs tab to the Pod Details page of the OpenShift Container Platform web console when the Logging Console Plug-in is in use. This enhancement is only available on OpenShift Container Platform 4.10 and later. ( LOG-2647 ) This enhancement adds Google Cloud Logging as an output option for log forwarding. ( LOG-1482 ) 1.4.17.2. Bug fixes Before this update, the Operator did not ensure that the pod was ready, which caused the cluster to reach an inoperable state during a cluster restart. With this update, the Operator marks new pods as ready before continuing to a new pod during a restart, which resolves the issue. ( LOG-2745 ) Before this update, Fluentd would sometimes not recognize that the Kubernetes platform rotated the log file and would no longer read log messages. This update corrects that by setting the configuration parameter suggested by the upstream development team. ( LOG-2995 ) Before this update, the addition of multi-line error detection caused internal routing to change and forward records to the wrong destination. With this update, the internal routing is correct. ( LOG-2801 ) Before this update, changing the OpenShift Container Platform web console's refresh interval created an error when the Query field was empty. With this update, changing the interval is not an available option when the Query field is empty. ( LOG-2917 ) 1.4.17.3. CVEs CVE-2022-1705 CVE-2022-2526 CVE-2022-29154 CVE-2022-30631 CVE-2022-32148 CVE-2022-32206 CVE-2022-32208 1.4.18. Logging 5.5.0 This release includes OpenShift Logging Bug Fix Release 5.5.0 . 1.4.18.1. Enhancements With this update, you can forward structured logs from different containers within the same pod to different indices. To use this feature, you must configure the pipeline with multi-container support and annotate the pods. ( LOG-1296 ) Important JSON formatting of logs varies by application. Because creating too many indices impacts performance, limit your use of this feature to creating indices for logs that have incompatible JSON formats. Use queries to separate logs from different namespaces, or applications with compatible JSON formats. With this update, you can filter logs with Elasticsearch outputs by using the Kubernetes common labels, app.kubernetes.io/component , app.kubernetes.io/managed-by , app.kubernetes.io/part-of , and app.kubernetes.io/version . Non-Elasticsearch output types can use all labels included in kubernetes.labels . ( LOG-2388 ) With this update, clusters with AWS Security Token Service (STS) enabled may use STS authentication to forward logs to Amazon CloudWatch. ( LOG-1976 ) With this update, the Loki Operator and Vector collector move from Technical Preview to General Availability. Full feature parity with prior releases is pending, and some APIs remain Technical Previews. See the Logging with the LokiStack section for details. 1.4.18.2. Bug fixes Before this update, clusters configured to forward logs to Amazon CloudWatch wrote rejected log files to temporary storage, causing cluster instability over time. With this update, chunk backup for all storage options has been disabled, resolving the issue. ( LOG-2746 ) Before this update, the Operator was using versions of some APIs that are deprecated and planned for removal in future versions of OpenShift Container Platform. This update moves dependencies to the supported API versions.
( LOG-2656 ) Before this update, multiple ClusterLogForwarder pipelines configured for multiline error detection caused the collector to go into a crashloopbackoff error state. This update fixes the issue where multiple configuration sections had the same unique ID. ( LOG-2241 ) Before this update, the collector could not save non UTF-8 symbols to the Elasticsearch storage logs. With this update the collector encodes non UTF-8 symbols, resolving the issue. ( LOG-2203 ) Before this update, non-latin characters displayed incorrectly in Kibana. With this update, Kibana displays all valid UTF-8 symbols correctly. ( LOG-2784 ) 1.4.18.3. CVEs CVE-2021-38561 CVE-2022-1012 CVE-2022-1292 CVE-2022-1586 CVE-2022-1785 CVE-2022-1897 CVE-2022-1927 CVE-2022-2068 CVE-2022-2097 CVE-2022-21698 CVE-2022-30631 CVE-2022-32250 | [
"tls.verify_certificate = false tls.verify_hostname = false",
"ERROR vector::cli: Configuration error. error=redefinition of table transforms.audit for key transforms.audit",
"oc get clusterversion/version -o jsonpath='{.spec.clusterID}{\"\\n\"}'"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/logging/release-notes |
7.12. Creating a Cloned Virtual Machine Based on a Template | 7.12. Creating a Cloned Virtual Machine Based on a Template Cloned virtual machines are based on templates and inherit the settings of the template. A cloned virtual machine does not depend on the template on which it was based after it has been created. This means the template can be deleted if no other dependencies exist. Note If you clone a virtual machine from a template, the name of the template on which that virtual machine was based is displayed in the General tab of the Edit Virtual Machine window for that virtual machine. If you change the name of that template, the name of the template in the General tab will also be updated. However, if you delete the template from the Manager, the original name of that template will be displayed instead. Cloning a Virtual Machine Based on a Template Click Compute Virtual Machines . Click New . Select the Cluster on which the virtual machine will run. Select a template from the Based on Template drop-down menu. Enter a Name , Description and any Comments . You can accept the default values inherited from the template in the rest of the fields, or change them if required. Click the Resource Allocation tab. Select the Clone radio button in the Storage Allocation area. Select the disk format from the Format drop-down list. This affects the speed of the clone operation and the amount of disk space the new virtual machine initially requires. QCOW2 (Default) Faster clone operation Optimized use of storage capacity Disk space allocated only as required Raw Slower clone operation Optimized virtual machine read and write operations All disk space requested in the template is allocated at the time of the clone operation Use the Target drop-down menu to select the storage domain on which the virtual machine's virtual disk will be stored. Click OK . Note Cloning a virtual machine may take some time. A new copy of the template's disk must be created. During this time, the virtual machine's status is first Image Locked , then Down . The virtual machine is created and displayed in the Virtual Machines tab. You can now assign users to it, and can begin using it when the clone operation is complete. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/creating_a_cloned_virtual_machine_based_on_a_template |
Chapter 21. Parameter and Configuration Files on IBM Z The IBM Z architecture can use a customized parameter file to pass boot parameters to the kernel and the installation program. This section describes the contents of this parameter file. You need only read this section if you intend to change the shipped parameter file. You need to change the parameter file if you want to: install unattended with Kickstart. choose non-default installation settings that are not accessible through the installation program's interactive user interface, such as rescue mode. The parameter file can be used to set up networking non-interactively before the installation program (loader and Anaconda ) starts. The kernel parameter file is limited to 895 characters plus an end-of-line character. The parameter file can be variable or fixed record format. Fixed record format increases the file size by padding each line up to the record length. Should you encounter problems with the installation program not recognizing all specified parameters in LPAR environments, you can try to put all parameters in one single line or start and end each line with a space character. The parameter file contains kernel parameters, such as ro , and parameters for the installation process, such as vncpassword=test or vnc . 21.1. Required Parameters The following parameters are required and must be included in the parameter file. They are also provided in the file generic.prm in directory images/ of the installation DVD: ro mounts the root file system, which is a RAM disk, read-only. ramdisk_size= size modifies the memory size reserved for the RAM disk to ensure that the Red Hat Enterprise Linux installation program fits within it. For example: ramdisk_size=40000 . The generic.prm file also contains the additional parameter cio_ignore=all,!condev . This setting speeds up boot and device detection on systems with many devices. The installation program transparently handles the activation of ignored devices. Important To avoid installation problems arising from cio_ignore support not being implemented throughout the entire stack, adapt the cio_ignore= parameter value to your system or remove the parameter entirely from your parameter file used for booting (IPL) the installation program. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/chap-parameter-configuration-files-s390
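For the IBM Z parameter file described in Chapter 21 above, a minimal sketch of the required parameters, built only from the documented values ( ramdisk_size=40000 is the example size given above and cio_ignore=all,!condev is the entry shipped in generic.prm ), could look like this single line:
ro ramdisk_size=40000 cio_ignore=all,!condev
Any installation parameters you need, such as vnc or vncpassword=test , are appended to the same file, keeping within the 895-character limit.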
Chapter 4. About Kafka Connect | Chapter 4. About Kafka Connect Kafka Connect is an integration toolkit for streaming data between Kafka brokers and other systems. The other system is typically an external data source or target, such as a database. Kafka Connect uses a plugin architecture to provide the implementation artifacts for connectors. Plugins allow connections to other systems and provide additional configuration to manipulate data. Plugins include connectors and other components, such as data converters and transforms. A connector operates with a specific type of external system. Each connector defines a schema for its configuration. You supply the configuration to Kafka Connect to create a connector instance within Kafka Connect. Connector instances then define a set of tasks for moving data between systems. Streams for Apache Kafka operates Kafka Connect in distributed mode , distributing data streaming tasks across one or more worker pods. A Kafka Connect cluster comprises a group of worker pods. Each connector is instantiated on a single worker. Each connector comprises one or more tasks that are distributed across the group of workers. Distribution across workers permits highly scalable pipelines. Workers convert data from one format into another format that's suitable for the source or target system. Depending on the configuration of the connector instance, workers might also apply transforms (also known as Single Message Transforms, or SMTs). Transforms adjust messages, such as filtering certain data, before they are converted. Kafka Connect has some built-in transforms, but other transformations can be provided by plugins if necessary. 4.1. How Kafka Connect streams data Kafka Connect uses connector instances to integrate with other systems to stream data. Kafka Connect loads existing connector instances on start up and distributes data streaming tasks and connector configuration across worker pods. Workers run the tasks for the connector instances. Each worker runs as a separate pod to make the Kafka Connect cluster more fault tolerant. If there are more tasks than workers, workers are assigned multiple tasks. If a worker fails, its tasks are automatically assigned to active workers in the Kafka Connect cluster. The main Kafka Connect components used in streaming data are as follows: Connectors to create tasks Tasks to move data Workers to run tasks Transforms to manipulate data Converters to convert data 4.1.1. Connectors Connectors can be one of the following type: Source connectors that push data into Kafka Sink connectors that extract data out of Kafka Plugins provide the implementation for Kafka Connect to run connector instances. Connector instances create the tasks required to transfer data in and out of Kafka. The Kafka Connect runtime orchestrates the tasks to split the work required between the worker pods. MirrorMaker 2 also uses the Kafka Connect framework. In this case, the external data system is another Kafka cluster. Specialized connectors for MirrorMaker 2 manage data replication between source and target Kafka clusters. Note In addition to the MirrorMaker 2 connectors, Kafka provides two connectors as examples: FileStreamSourceConnector streams data from a file on the worker's filesystem to Kafka, reading the input file and sending each line to a given Kafka topic. FileStreamSinkConnector streams data from Kafka to the worker's filesystem, reading messages from a Kafka topic and writing a line for each in an output file. 
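As an illustration of how a connector instance might be declared when connectors are managed through KafkaConnector resources (described later in this chapter), the following sketch configures the example FileStreamSourceConnector mentioned in the note above. It assumes the FileStream connector plugin is available to the Kafka Connect cluster; the connector name, cluster name, file path, and topic are hypothetical values chosen for the example, not values defined by Streams for Apache Kafka:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-file-source                       # hypothetical connector name
  labels:
    strimzi.io/cluster: my-connect-cluster   # hypothetical Kafka Connect cluster that runs the connector
spec:
  class: org.apache.kafka.connect.file.FileStreamSourceConnector  # example source connector provided by Kafka
  tasksMax: 1                                # upper limit on the number of tasks created for this connector
  config:
    file: /tmp/example.txt                   # hypothetical input file on the worker's filesystem
    topic: my-topic                          # hypothetical Kafka topic that receives each line of the file
Each line read from the file becomes a record sent to the configured topic, matching the FileStreamSourceConnector behavior described above.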
The following source connector diagram shows the process flow for a source connector that streams records from an external data system. A Kafka Connect cluster might operate source and sink connectors at the same time. Workers are running in distributed mode in the cluster. Workers can run one or more tasks for more than one connector instance. Source connector streaming data to Kafka A plugin provides the implementation artifacts for the source connector A single worker initiates the source connector instance The source connector creates the tasks to stream data Tasks run in parallel to poll the external data system and return records Transforms adjust the records, such as filtering or relabelling them Converters put the records into a format suitable for Kafka The source connector is managed using KafkaConnectors or the Kafka Connect API The following sink connector diagram shows the process flow when streaming data from Kafka to an external data system. Sink connector streaming data from Kafka A plugin provides the implementation artifacts for the sink connector A single worker initiates the sink connector instance The sink connector creates the tasks to stream data Tasks run in parallel to poll Kafka and return records Converters put the records into a format suitable for the external data system Transforms adjust the records, such as filtering or relabelling them The sink connector is managed using KafkaConnectors or the Kafka Connect API 4.1.2. Tasks Data transfer orchestrated by the Kafka Connect runtime is split into tasks that run in parallel. A task is started using the configuration supplied by a connector instance. Kafka Connect distributes the task configurations to workers, which instantiate and execute tasks. A source connector task polls the external data system and returns a list of records that a worker sends to the Kafka brokers. A sink connector task receives Kafka records from a worker for writing to the external data system. For sink connectors, the number of tasks created relates to the number of partitions being consumed. For source connectors, how the source data is partitioned is defined by the connector. You can control the maximum number of tasks that can run in parallel by setting tasksMax in the connector configuration. The connector might create fewer tasks than the maximum setting. For example, the connector might create fewer tasks if it's not possible to split the source data into that many partitions. Note In the context of Kafka Connect, a partition can mean a topic partition or a shard of data in an external system. 4.1.3. Workers Workers employ the connector configuration deployed to the Kafka Connect cluster. The configuration is stored in an internal Kafka topic used by Kafka Connect. Workers also run connectors and their tasks. A Kafka Connect cluster contains a group of workers with the same group.id . The ID identifies the cluster within Kafka. The ID is assigned in the worker configuration through the KafkaConnect resource. Worker configuration also specifies the names of internal Kafka Connect topics. The topics store connector configuration, offset, and status information. The group ID and names of these topics must also be unique to the Kafka Connect cluster. Workers are assigned one or more connector instances and tasks. The distributed approach to deploying Kafka Connect is fault tolerant and scalable. If a worker pod fails, the tasks it was running are reassigned to active workers. 
You can add to a group of worker pods through configuration of the replicas property in the KafkaConnect resource. 4.1.4. Transforms Kafka Connect translates and transforms external data. Single-message transforms change messages into a format suitable for the target destination. For example, a transform might insert or rename a field. Transforms can also filter and route data. Plugins contain the implementation required for workers to perform one or more transformations. Source connectors apply transforms before converting data into a format supported by Kafka. Sink connectors apply transforms after converting data into a format suitable for an external data system. A transform comprises a set of Java class files packaged in a JAR file for inclusion in a connector plugin. Kafka Connect provides a set of standard transforms, but you can also create your own. 4.1.5. Converters When a worker receives data, it converts the data into an appropriate format using a converter. You specify converters for workers in the worker config in the KafkaConnect resource. Kafka Connect can convert data to and from formats supported by Kafka, such as JSON or Avro. It also supports schemas for structuring data. If you are not converting data into a structured format, you don't need to enable schemas. Note You can also specify converters for specific connectors to override the general Kafka Connect worker configuration that applies to all workers. Additional resources Apache Kafka documentation Kafka Connect configuration of workers Synchronizing data between Kafka clusters using MirrorMaker 2 | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_on_openshift_overview/kafka-connect-components_str |
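Bringing the worker settings described in this chapter together (replicas, the group ID, the internal topics, and converters), the following is a minimal sketch of a KafkaConnect resource. The cluster name, bootstrap address, and topic names are assumptions for the example rather than required values.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  annotations:
    # Lets connectors be managed through KafkaConnector resources (an assumed choice)
    strimzi.io/use-connector-resources: "true"
spec:
  # Number of worker pods in the Kafka Connect cluster
  replicas: 3
  # Address of the Kafka cluster that the workers connect to (assumed)
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  config:
    # Group ID that identifies this Kafka Connect cluster within Kafka
    group.id: my-connect-cluster
    # Internal topics that store connector configuration, offsets, and status
    config.storage.topic: my-connect-cluster-configs
    offset.storage.topic: my-connect-cluster-offsets
    status.storage.topic: my-connect-cluster-status
    # Converters applied by the workers; schemas are disabled because the data is not structured
    key.converter: org.apache.kafka.connect.json.JsonConverter
    value.converter: org.apache.kafka.connect.json.JsonConverter
    key.converter.schemas.enable: false
    value.converter.schemas.enable: false

The group ID and the three internal topic names must be unique to this Kafka Connect cluster, and individual connectors can override the converter settings shown here.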
4.7. RHEA-2012:0981 - new packages: java-1.7.0-openjdk | 4.7. RHEA-2012:0981 - new packages: java-1.7.0-openjdk New java-1.7.0-openjdk packages that provide OpenJDK 7 are now available as a Technology Preview for Red Hat Enterprise Linux 6. [Updated 9 June 2012] This advisory has been updated to reflect the fact that java-1.7.0-openjdk is fully supported, and it no longer claims that java-1.7.0-openjdk is a Technology Preview feature. The packages included in this revised update have not been changed in any way from the packages included in the previous version of this advisory. The java-1.7.0-openjdk packages provide the OpenJDK 7 Java Runtime Environment and the OpenJDK 7 Java Software Development Kit. This enhancement update adds the new java-1.7.0-openjdk packages to Red Hat Enterprise Linux 6. (BZ# 803726 ) These packages do not replace the existing version of OpenJDK (java-1.6.0-openjdk), if present. Users can safely install OpenJDK 7 in addition to OpenJDK 6. The system default version of Java can be configured using the 'alternatives' tool. All users who want to use java-1.7.0-openjdk should install these newly released packages, which add this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/rhea-2012-0981
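As a brief illustration of the 'alternatives' tool mentioned above, the system default java command can be switched interactively after both OpenJDK versions are installed; the menu entries shown by the command depend on which JDK packages are present on the system:

# alternatives --config java

Selecting the java-1.7.0-openjdk entry makes OpenJDK 7 the system default while leaving OpenJDK 6 installed and available.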
Chapter 14. Pre-caching images for single-node OpenShift deployments | Chapter 14. Pre-caching images for single-node OpenShift deployments In environments with limited bandwidth where you use the GitOps Zero Touch Provisioning (ZTP) solution to deploy a large number of clusters, you want to avoid downloading all the images that are required for bootstrapping and installing OpenShift Container Platform. The limited bandwidth at remote single-node OpenShift sites can cause long deployment times. The factory-precaching-cli tool allows you to pre-stage servers before shipping them to the remote site for ZTP provisioning. The factory-precaching-cli tool does the following: Downloads the RHCOS rootfs image that is required by the minimal ISO to boot. Creates a partition from the installation disk labelled as data . Formats the disk in xfs. Creates a GUID Partition Table (GPT) data partition at the end of the disk, where the size of the partition is configurable by the tool. Copies the container images required to install OpenShift Container Platform. Copies the container images required by ZTP to install OpenShift Container Platform. Optional: Copies Day-2 Operators to the partition. Important The factory-precaching-cli tool is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 14.1. Getting the factory-precaching-cli tool The factory-precaching-cli tool Go binary is publicly available in the Telco RAN tools container image . The factory-precaching-cli tool Go binary in the container image is executed on the server running an RHCOS live image using podman . If you are working in a disconnected environment or have a private registry, you need to copy the image there so you can download the image to the server. Procedure Pull the factory-precaching-cli tool image by running the following command: # podman pull quay.io/openshift-kni/telco-ran-tools:latest Verification To check that the tool is available, query the current version of the factory-precaching-cli tool Go binary: # podman run quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli -v Example output factory-precaching-cli version 20221018.120852+main.feecf17 14.2. Booting from a live operating system image You can use the factory-precaching-cli tool to boot servers where only one disk is available and an external disk drive cannot be attached to the server. Warning RHCOS requires the disk to not be in use when the disk is about to be written with an RHCOS image. Depending on the server hardware, you can mount the RHCOS live ISO on the blank server using one of the following methods: Using the Dell RACADM tool on a Dell server. Using the HPONCFG tool on an HP server. Using the Redfish BMC API. Note It is recommended to automate the mounting procedure. To automate the procedure, you need to pull the required images and host them on a local HTTP server. Prerequisites You powered up the host. You have network connectivity to the host. Procedure This example procedure uses the Redfish BMC API to mount the RHCOS live ISO.
Mount the RHCOS live ISO: Check virtual media status: USD curl --globoff -H "Content-Type: application/json" -H \ "Accept: application/json" -k -X GET --user USD{username_password} \ https://USDBMC_ADDRESS/redfish/v1/Managers/Self/VirtualMedia/1 | python -m json.tool Mount the ISO file as a virtual media: USD curl --globoff -L -w "%{http_code} %{url_effective}\\n" -ku USD{username_password} -H "Content-Type: application/json" -H "Accept: application/json" -d '{"Image": "http://[USDHTTPd_IP]/RHCOS-live.iso"}' -X POST https://USDBMC_ADDRESS/redfish/v1/Managers/Self/VirtualMedia/1/Actions/VirtualMedia.InsertMedia Set the boot order to boot from the virtual media once: USD curl --globoff -L -w "%{http_code} %{url_effective}\\n" -ku USD{username_password} -H "Content-Type: application/json" -H "Accept: application/json" -d '{"Boot":{ "BootSourceOverrideEnabled": "Once", "BootSourceOverrideTarget": "Cd", "BootSourceOverrideMode": "UEFI"}}' -X PATCH https://USDBMC_ADDRESS/redfish/v1/Systems/Self Reboot and ensure that the server is booting from virtual media. Additional resources For more information about the butane utility, see About Butane . For more information about creating a custom live RHCOS ISO, see Creating a custom live RHCOS ISO for remote server access . For more information about using the Dell RACADM tool, see Integrated Dell Remote Access Controller 9 RACADM CLI Guide . For more information about using the HP HPONCFG tool, see Using HPONCFG . For more information about using the Redfish BMC API, see Booting from an HTTP-hosted ISO image using the Redfish API . 14.3. Partitioning the disk To run the full pre-caching process, you have to boot from a live ISO and use the factory-precaching-cli tool from a container image to partition and pre-cache all the artifacts required. A live ISO or RHCOS live ISO is required because the disk must not be in use when the operating system (RHCOS) is written to the device during the provisioning. Single-disk servers can also be enabled with this procedure. Prerequisites You have a disk that is not partitioned. You have access to the quay.io/openshift-kni/telco-ran-tools:latest image. You have enough storage to install OpenShift Container Platform and pre-cache the required images. Procedure Verify that the disk is cleared: # lsblk Example output NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk Erase any file system, RAID or partition table signatures from the device: # wipefs -a /dev/nvme0n1 Example output /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa Important The tool fails if the disk is not empty because it uses partition number 1 of the device for pre-caching the artifacts. 14.3.1. Creating the partition Once the device is ready, you create a single partition and a GPT partition table. The partition is automatically labelled as data and created at the end of the device. Otherwise, the partition will be overridden by the coreos-installer . Important The coreos-installer requires the partition to be created at the end of the device and to be labelled as data . Both requirements are necessary to save the partition when writing the RHCOS image to the disk. 
Prerequisites The container must run as privileged because it formats host devices. You have to mount the /dev folder so that the process can be executed inside the container. Procedure In the following example, the size of the partition is 250 GiB to allow pre-caching the DU profile for Day 2 Operators. Run the container as privileged and partition the disk: # podman run -v /dev:/dev --privileged \ --rm quay.io/openshift-kni/telco-ran-tools:latest -- \ factory-precaching-cli partition \ 1 -d /dev/nvme0n1 \ 2 -s 250 3 1 Specifies the partitioning function of the factory-precaching-cli tool. 2 Defines the root directory on the disk. 3 Defines the size of the disk in GB. Check the storage information: # lsblk Example output NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk └─nvme0n1p1 259:3 0 250G 0 part Verification You must verify that the following requirements are met: The device has a GPT partition table. The partition uses the last sectors of the device. The partition is correctly labeled as data . Query the disk status to verify that the disk is partitioned as expected: # gdisk -l /dev/nvme0n1 Example output GPT fdisk (gdisk) version 1.0.3 Partition table scan: MBR: protective BSD: not present APM: not present GPT: present Found valid GPT with protective MBR; using GPT. Disk /dev/nvme0n1: 3125627568 sectors, 1.5 TiB Model: Dell Express Flash PM1725b 1.6TB SFF Sector size (logical/physical): 512/512 bytes Disk identifier (GUID): CB5A9D44-9B3C-4174-A5C1-C64957910B61 Partition table holds up to 128 entries Main partition table begins at sector 2 and ends at sector 33 First usable sector is 34, last usable sector is 3125627534 Partitions will be aligned on 2048-sector boundaries Total free space is 2601338846 sectors (1.2 TiB) Number Start (sector) End (sector) Size Code Name 1 2601338880 3125627534 250.0 GiB 8300 data 14.3.2. Mounting the partition After verifying that the disk is partitioned correctly, you can mount the device into /mnt . Important It is recommended to mount the device into /mnt because that mounting point is used during GitOps ZTP preparation. Verify that the partition is formatted as xfs : # lsblk -f /dev/nvme0n1 Example output NAME FSTYPE LABEL UUID MOUNTPOINT nvme0n1 └─nvme0n1p1 xfs 1bee8ea4-d6cf-4339-b690-a76594794071 Mount the partition: # mount /dev/nvme0n1p1 /mnt/ Verification Check that the partition is mounted: # lsblk Example output NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk └─nvme0n1p1 259:2 0 250G 0 part /var/mnt 1 1 The mount point is /var/mnt because the /mnt folder in RHCOS is a link to /var/mnt . 14.4. Downloading the images The factory-precaching-cli tool allows you to download the following images to your partitioned server: OpenShift Container Platform images Operator images that are included in the distributed unit (DU) profile for 5G RAN sites Operator images from disconnected registries Note The list of available Operator images can vary in different OpenShift Container Platform releases. 14.4.1. Downloading with parallel workers The factory-precaching-cli tool uses parallel workers to download multiple images simultaneously. You can configure the number of workers with the --parallel or -p option. The default number is set to 80% of the available CPUs to the server.
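For example, a download invocation that limits the tool to four parallel workers might look like the following; this is a sketch that reuses the release, RHACM, and multicluster engine versions from the examples in this chapter, and only the -p value is new:

# podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download -r 4.18.0 --acm-version 2.6.3 --mce-version 2.1.4 -f /mnt -p 4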
Note Your login shell may be restricted to a subset of CPUs, which reduces the CPUs available to the container. To remove this restriction, you can precede your commands with taskset 0xffffffff , for example: # taskset 0xffffffff podman run --rm quay.io/openshift-kni/telco-ran-tools:latest factory-precaching-cli download --help 14.4.2. Preparing to download the OpenShift Container Platform images To download OpenShift Container Platform container images, you need to know the multicluster engine version. When you use the --du-profile flag, you also need to specify the Red Hat Advanced Cluster Management (RHACM) version running in the hub cluster that is going to provision the single-node OpenShift. Prerequisites You have RHACM and the multicluster engine Operator installed. You partitioned the storage device. You have enough space for the images on the partitioned device. You connected the bare-metal server to the Internet. You have a valid pull secret. Procedure Check the RHACM version and the multicluster engine version by running the following commands in the hub cluster: USD oc get csv -A | grep -i advanced-cluster-management Example output open-cluster-management advanced-cluster-management.v2.6.3 Advanced Cluster Management for Kubernetes 2.6.3 advanced-cluster-management.v2.6.3 Succeeded USD oc get csv -A | grep -i multicluster-engine Example output multicluster-engine cluster-group-upgrades-operator.v0.0.3 cluster-group-upgrades-operator 0.0.3 Pending multicluster-engine multicluster-engine.v2.1.4 multicluster engine for Kubernetes 2.1.4 multicluster-engine.v2.0.3 Succeeded multicluster-engine openshift-gitops-operator.v1.5.7 Red Hat OpenShift GitOps 1.5.7 openshift-gitops-operator.v1.5.6-0.1664915551.p Succeeded multicluster-engine openshift-pipelines-operator-rh.v1.6.4 Red Hat OpenShift Pipelines 1.6.4 openshift-pipelines-operator-rh.v1.6.3 Succeeded To access the container registry, copy a valid pull secret on the server to be installed: Create the .docker folder: USD mkdir /root/.docker Copy the valid pull in the config.json file to the previously created .docker/ folder: USD cp config.json /root/.docker/config.json 1 1 /root/.docker/config.json is the default path where podman checks for the login credentials for the registry. Note If you use a different registry to pull the required artifacts, you need to copy the proper pull secret. If the local registry uses TLS, you need to include the certificates from the registry as well. 14.4.3. Downloading the OpenShift Container Platform images The factory-precaching-cli tool allows you to pre-cache all the container images required to provision a specific OpenShift Container Platform release. Procedure Pre-cache the release by running the following command: # podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools -- \ factory-precaching-cli download \ 1 -r 4.18.0 \ 2 --acm-version 2.6.3 \ 3 --mce-version 2.1.4 \ 4 -f /mnt \ 5 --img quay.io/custom/repository 6 1 Specifies the downloading function of the factory-precaching-cli tool. 2 Defines the OpenShift Container Platform release version. 3 Defines the RHACM version. 4 Defines the multicluster engine version. 5 Defines the folder where you want to download the images on the disk. 6 Optional. Defines the repository where you store your additional images. These images are downloaded and pre-cached on the disk. Example output Generated /mnt/imageset.yaml Generating list of pre-cached artifacts... 
Processing artifact [1/176]: ocp-v4.0-art-dev@sha256_6ac2b96bf4899c01a87366fd0feae9f57b1b61878e3b5823da0c3f34f707fbf5 Processing artifact [2/176]: ocp-v4.0-art-dev@sha256_f48b68d5960ba903a0d018a10544ae08db5802e21c2fa5615a14fc58b1c1657c Processing artifact [3/176]: ocp-v4.0-art-dev@sha256_a480390e91b1c07e10091c3da2257180654f6b2a735a4ad4c3b69dbdb77bbc06 Processing artifact [4/176]: ocp-v4.0-art-dev@sha256_ecc5d8dbd77e326dba6594ff8c2d091eefbc4d90c963a9a85b0b2f0e6155f995 Processing artifact [5/176]: ocp-v4.0-art-dev@sha256_274b6d561558a2f54db08ea96df9892315bb773fc203b1dbcea418d20f4c7ad1 Processing artifact [6/176]: ocp-v4.0-art-dev@sha256_e142bf5020f5ca0d1bdda0026bf97f89b72d21a97c9cc2dc71bf85050e822bbf ... Processing artifact [175/176]: ocp-v4.0-art-dev@sha256_16cd7eda26f0fb0fc965a589e1e96ff8577e560fcd14f06b5fda1643036ed6c8 Processing artifact [176/176]: ocp-v4.0-art-dev@sha256_cf4d862b4a4170d4f611b39d06c31c97658e309724f9788e155999ae51e7188f ... Summary: Release: 4.18.0 Hub Version: 2.6.3 ACM Version: 2.6.3 MCE Version: 2.1.4 Include DU Profile: No Workers: 83 Verification Check that all the images are compressed in the target folder of server: USD ls -l /mnt 1 1 It is recommended that you pre-cache the images in the /mnt folder. Example output -rw-r--r--. 1 root root 136352323 Oct 31 15:19 ocp-v4.0-art-dev@sha256_edec37e7cd8b1611d0031d45e7958361c65e2005f145b471a8108f1b54316c07.tgz -rw-r--r--. 1 root root 156092894 Oct 31 15:33 ocp-v4.0-art-dev@sha256_ee51b062b9c3c9f4fe77bd5b3cc9a3b12355d040119a1434425a824f137c61a9.tgz -rw-r--r--. 1 root root 172297800 Oct 31 15:29 ocp-v4.0-art-dev@sha256_ef23d9057c367a36e4a5c4877d23ee097a731e1186ed28a26c8d21501cd82718.tgz -rw-r--r--. 1 root root 171539614 Oct 31 15:23 ocp-v4.0-art-dev@sha256_f0497bb63ef6834a619d4208be9da459510df697596b891c0c633da144dbb025.tgz -rw-r--r--. 1 root root 160399150 Oct 31 15:20 ocp-v4.0-art-dev@sha256_f0c339da117cde44c9aae8d0bd054bceb6f19fdb191928f6912a703182330ac2.tgz -rw-r--r--. 1 root root 175962005 Oct 31 15:17 ocp-v4.0-art-dev@sha256_f19dd2e80fb41ef31d62bb8c08b339c50d193fdb10fc39cc15b353cbbfeb9b24.tgz -rw-r--r--. 1 root root 174942008 Oct 31 15:33 ocp-v4.0-art-dev@sha256_f1dbb81fa1aa724e96dd2b296b855ff52a565fbef003d08030d63590ae6454df.tgz -rw-r--r--. 1 root root 246693315 Oct 31 15:31 ocp-v4.0-art-dev@sha256_f44dcf2c94e4fd843cbbf9b11128df2ba856cd813786e42e3da1fdfb0f6ddd01.tgz -rw-r--r--. 1 root root 170148293 Oct 31 15:00 ocp-v4.0-art-dev@sha256_f48b68d5960ba903a0d018a10544ae08db5802e21c2fa5615a14fc58b1c1657c.tgz -rw-r--r--. 1 root root 168899617 Oct 31 15:16 ocp-v4.0-art-dev@sha256_f5099b0989120a8d08a963601214b5c5cb23417a707a8624b7eb52ab788a7f75.tgz -rw-r--r--. 1 root root 176592362 Oct 31 15:05 ocp-v4.0-art-dev@sha256_f68c0e6f5e17b0b0f7ab2d4c39559ea89f900751e64b97cb42311a478338d9c3.tgz -rw-r--r--. 1 root root 157937478 Oct 31 15:37 ocp-v4.0-art-dev@sha256_f7ba33a6a9db9cfc4b0ab0f368569e19b9fa08f4c01a0d5f6a243d61ab781bd8.tgz -rw-r--r--. 1 root root 145535253 Oct 31 15:26 ocp-v4.0-art-dev@sha256_f8f098911d670287826e9499806553f7a1dd3e2b5332abbec740008c36e84de5.tgz -rw-r--r--. 1 root root 158048761 Oct 31 15:40 ocp-v4.0-art-dev@sha256_f914228ddbb99120986262168a705903a9f49724ffa958bb4bf12b2ec1d7fb47.tgz -rw-r--r--. 1 root root 167914526 Oct 31 15:37 ocp-v4.0-art-dev@sha256_fa3ca9401c7a9efda0502240aeb8d3ae2d239d38890454f17fe5158b62305010.tgz -rw-r--r--. 1 root root 164432422 Oct 31 15:24 ocp-v4.0-art-dev@sha256_fc4783b446c70df30b3120685254b40ce13ba6a2b0bf8fb1645f116cf6a392f1.tgz -rw-r--r--. 
1 root root 306643814 Oct 31 15:11 troubleshoot@sha256_b86b8aea29a818a9c22944fd18243fa0347c7a2bf1ad8864113ff2bb2d8e0726.tgz 14.4.4. Downloading the Operator images You can also pre-cache Day-2 Operators used in the 5G Radio Access Network (RAN) Distributed Unit (DU) cluster configuration. The Day-2 Operators depend on the installed OpenShift Container Platform version. Important You need to include the RHACM hub and multicluster engine Operator versions by using the --acm-version and --mce-version flags so the factory-precaching-cli tool can pre-cache the appropriate containers images for RHACM and the multicluster engine Operator. Procedure Pre-cache the Operator images: # podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \ 1 -r 4.18.0 \ 2 --acm-version 2.6.3 \ 3 --mce-version 2.1.4 \ 4 -f /mnt \ 5 --img quay.io/custom/repository 6 --du-profile -s 7 1 Specifies the downloading function of the factory-precaching-cli tool. 2 Defines the OpenShift Container Platform release version. 3 Defines the RHACM version. 4 Defines the multicluster engine version. 5 Defines the folder where you want to download the images on the disk. 6 Optional. Defines the repository where you store your additional images. These images are downloaded and pre-cached on the disk. 7 Specifies pre-caching the Operators included in the DU configuration. Example output Generated /mnt/imageset.yaml Generating list of pre-cached artifacts... Processing artifact [1/379]: ocp-v4.0-art-dev@sha256_7753a8d9dd5974be8c90649aadd7c914a3d8a1f1e016774c7ac7c9422e9f9958 Processing artifact [2/379]: ose-kube-rbac-proxy@sha256_c27a7c01e5968aff16b6bb6670423f992d1a1de1a16e7e260d12908d3322431c Processing artifact [3/379]: ocp-v4.0-art-dev@sha256_370e47a14c798ca3f8707a38b28cfc28114f492bb35fe1112e55d1eb51022c99 ... Processing artifact [378/379]: ose-local-storage-operator@sha256_0c81c2b79f79307305e51ce9d3837657cf9ba5866194e464b4d1b299f85034d0 Processing artifact [379/379]: multicluster-operators-channel-rhel8@sha256_c10f6bbb84fe36e05816e873a72188018856ad6aac6cc16271a1b3966f73ceb3 ... Summary: Release: 4.18.0 Hub Version: 2.6.3 ACM Version: 2.6.3 MCE Version: 2.1.4 Include DU Profile: Yes Workers: 83 14.4.5. Pre-caching custom images in disconnected environments The --generate-imageset argument stops the factory-precaching-cli tool after the ImageSetConfiguration custom resource (CR) is generated. This allows you to customize the ImageSetConfiguration CR before downloading any images. After you customized the CR, you can use the --skip-imageset argument to download the images that you specified in the ImageSetConfiguration CR. You can customize the ImageSetConfiguration CR in the following ways: Add Operators and additional images Remove Operators and additional images Change Operator and catalog sources to local or disconnected registries Procedure Pre-cache the images: # podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \ 1 -r 4.18.0 \ 2 --acm-version 2.6.3 \ 3 --mce-version 2.1.4 \ 4 -f /mnt \ 5 --img quay.io/custom/repository 6 --du-profile -s \ 7 --generate-imageset 8 1 Specifies the downloading function of the factory-precaching-cli tool. 2 Defines the OpenShift Container Platform release version. 3 Defines the RHACM version. 4 Defines the multicluster engine version. 
5 Defines the folder where you want to download the images on the disk. 6 Optional. Defines the repository where you store your additional images. These images are downloaded and pre-cached on the disk. 7 Specifies pre-caching the Operators included in the DU configuration. 8 The --generate-imageset argument generates the ImageSetConfiguration CR only, which allows you to customize the CR. Example output Generated /mnt/imageset.yaml Example ImageSetConfiguration CR apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration mirror: platform: channels: - name: stable-4.18 minVersion: 4.18.0 1 maxVersion: 4.18.0 additionalImages: - name: quay.io/custom/repository operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.18 packages: - name: advanced-cluster-management 2 channels: - name: 'release-2.6' minVersion: 2.6.3 maxVersion: 2.6.3 - name: multicluster-engine 3 channels: - name: 'stable-2.1' minVersion: 2.1.4 maxVersion: 2.1.4 - name: local-storage-operator 4 channels: - name: 'stable' - name: ptp-operator 5 channels: - name: 'stable' - name: sriov-network-operator 6 channels: - name: 'stable' - name: cluster-logging 7 channels: - name: 'stable' - name: lvms-operator 8 channels: - name: 'stable-4.18' - name: amq7-interconnect-operator 9 channels: - name: '1.10.x' - name: bare-metal-event-relay 10 channels: - name: 'stable' - catalog: registry.redhat.io/redhat/certified-operator-index:v4.18 packages: - name: sriov-fec 11 channels: - name: 'stable' 1 The platform versions match the versions passed to the tool. 2 3 The versions of RHACM and the multicluster engine Operator match the versions passed to the tool. 4 5 6 7 8 9 10 11 The CR contains all the specified DU Operators. Customize the catalog resource in the CR: apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration mirror: platform: [...] operators: - catalog: eko4.cloud.lab.eng.bos.redhat.com:8443/redhat/certified-operator-index:v4.18 packages: - name: sriov-fec channels: - name: 'stable' When you download images by using a local or disconnected registry, you have to first add certificates for the registries that you want to pull the content from. To avoid any errors, copy the registry certificate into your server: # cp /tmp/eko4-ca.crt /etc/pki/ca-trust/source/anchors/. Then, update the certificates trust store: # update-ca-trust Mount the host /etc/pki folder into the factory-cli image: # podman run -v /mnt:/mnt -v /root/.docker:/root/.docker -v /etc/pki:/etc/pki --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- \ factory-precaching-cli download \ 1 -r 4.18.0 \ 2 --acm-version 2.6.3 \ 3 --mce-version 2.1.4 \ 4 -f /mnt \ 5 --img quay.io/custom/repository 6 --du-profile -s \ 7 --skip-imageset 8 1 Specifies the downloading function of the factory-precaching-cli tool. 2 Defines the OpenShift Container Platform release version. 3 Defines the RHACM version. 4 Defines the multicluster engine version. 5 Defines the folder where you want to download the images on the disk. 6 Optional. Defines the repository where you store your additional images. These images are downloaded and pre-cached on the disk. 7 Specifies pre-caching the Operators included in the DU configuration. 8 The --skip-imageset argument allows you to download the images that you specified in your customized ImageSetConfiguration CR. 
Download the images without generating a new imageSetConfiguration CR: # podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download -r 4.18.0 \ --acm-version 2.6.3 --mce-version 2.1.4 -f /mnt \ --img quay.io/custom/repository \ --du-profile -s \ --skip-imageset Additional resources To access the online Red Hat registries, see OpenShift installation customization tools . For more information about using the multicluster engine, see About cluster lifecycle with the multicluster engine operator . 14.5. Pre-caching images in GitOps ZTP The SiteConfig manifest defines how an OpenShift cluster is to be installed and configured. In the GitOps Zero Touch Provisioning (ZTP) provisioning workflow, the factory-precaching-cli tool requires the following additional fields in the SiteConfig manifest: clusters.ignitionConfigOverride nodes.installerArgs nodes.ignitionConfigOverride Important SiteConfig v1 is deprecated starting with OpenShift Container Platform version 4.18. Equivalent and improved functionality is now available through the SiteConfig Operator using the ClusterInstance custom resource. For more information, see Procedure to transition from SiteConfig CRs to the ClusterInstance API . For more information about the SiteConfig Operator, see SiteConfig . Example SiteConfig with additional fields apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: "example-5g-lab" namespace: "example-5g-lab" spec: baseDomain: "example.domain.redhat.com" pullSecretRef: name: "assisted-deployment-pull-secret" clusterImageSetNameRef: "img4.9.10-x86-64-appsub" 1 sshPublicKey: "ssh-rsa ..." clusters: - clusterName: "sno-worker-0" clusterImageSetNameRef: "eko4-img4.11.5-x86-64-appsub" 2 clusterLabels: group-du-sno: "" common-411: true sites : "example-5g-lab" vendor: "OpenShift" clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.19.32.192/26 serviceNetwork: - 172.30.0.0/16 networkType: "OVNKubernetes" additionalNTPSources: - clock.corp.redhat.com ignitionConfigOverride: '{ "ignition": { "version": "3.1.0" }, "systemd": { "units": [ { "name": "var-mnt.mount", "enabled": true, "contents": "[Unit]\nDescription=Mount partition with artifacts\nBefore=precache-images.service\nBindsTo=precache-images.service\nStopWhenUnneeded=true\n\n[Mount]\nWhat=/dev/disk/by-partlabel/data\nWhere=/var/mnt\nType=xfs\nTimeoutSec=30\n\n[Install]\nRequiredBy=precache-images.service" }, { "name": "precache-images.service", "enabled": true, "contents": "[Unit]\nDescription=Extracts the precached images in discovery stage\nAfter=var-mnt.mount\nBefore=agent.service\n\n[Service]\nType=oneshot\nUser=root\nWorkingDirectory=/var/mnt\nExecStart=bash /usr/local/bin/extract-ai.sh\n#TimeoutStopSec=30\n\n[Install]\nWantedBy=multi-user.target default.target\nWantedBy=agent.service" } ] }, "storage": { "files": [ { "overwrite": true, "path": "/usr/local/bin/extract-ai.sh", "mode": 755, "user": { "name": "root" }, "contents": { "source": 
"data:,%23%21%2Fbin%2Fbash%0A%0AFOLDER%3D%22%24%7BFOLDER%3A-%24%28pwd%29%7D%22%0AOCP_RELEASE_LIST%3D%22%24%7BOCP_RELEASE_LIST%3A-ai-images.txt%7D%22%0ABINARY_FOLDER%3D%2Fvar%2Fmnt%0A%0Apushd%20%24FOLDER%0A%0Atotal_copies%3D%24%28sort%20-u%20%24BINARY_FOLDER%2F%24OCP_RELEASE_LIST%20%7C%20wc%20-l%29%20%20%23%20Required%20to%20keep%20track%20of%20the%20pull%20task%20vs%20total%0Acurrent_copy%3D1%0A%0Awhile%20read%20-r%20line%3B%0Ado%0A%20%20uri%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%241%7D%27%29%0A%20%20%23tar%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%242%7D%27%29%0A%20%20podman%20image%20exists%20%24uri%0A%20%20if%20%5B%5B%20%24%3F%20-eq%200%20%5D%5D%3B%20then%0A%20%20%20%20%20%20echo%20%22Skipping%20existing%20image%20%24tar%22%0A%20%20%20%20%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20%20%20%20%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%0A%20%20%20%20%20%20continue%0A%20%20fi%0A%20%20tar%3D%24%28echo%20%22%24uri%22%20%7C%20%20rev%20%7C%20cut%20-d%20%22%2F%22%20-f1%20%7C%20rev%20%7C%20tr%20%22%3A%22%20%22_%22%29%0A%20%20tar%20zxvf%20%24%7Btar%7D.tgz%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-f%20%24%7Btar%7D.gz%3B%20fi%0A%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20skopeo%20copy%20dir%3A%2F%2F%24%28pwd%29%2F%24%7Btar%7D%20containers-storage%3A%24%7Buri%7D%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-rf%20%24%7Btar%7D%3B%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%3B%20fi%0Adone%20%3C%20%24%7BBINARY_FOLDER%7D%2F%24%7BOCP_RELEASE_LIST%7D%0A%0A%23%20workaround%20while%20https%3A%2F%2Fgithub.com%2Fopenshift%2Fassisted-service%2Fpull%2F3546%0A%23cp%20%2Fvar%2Fmnt%2Fmodified-rhcos-4.10.3-x86_64-metal.x86_64.raw.gz%20%2Fvar%2Ftmp%2F.%0A%0Aexit%200" } }, { "overwrite": true, "path": "/usr/local/bin/agent-fix-bz1964591", "mode": 755, "user": { "name": "root" }, "contents": { "source": "data:,%23%21%2Fusr%2Fbin%2Fsh%0A%0A%23%20This%20script%20is%20a%20workaround%20for%20bugzilla%201964591%20where%20symlinks%20inside%20%2Fvar%2Flib%2Fcontainers%2F%20get%0A%23%20corrupted%20under%20some%20circumstances.%0A%23%0A%23%20In%20order%20to%20let%20agent.service%20start%20correctly%20we%20are%20checking%20here%20whether%20the%20requested%0A%23%20container%20image%20exists%20and%20in%20case%20%22podman%20images%22%20returns%20an%20error%20we%20try%20removing%20the%20faulty%0A%23%20image.%0A%23%0A%23%20In%20such%20a%20scenario%20agent.service%20will%20detect%20the%20image%20is%20not%20present%20and%20pull%20it%20again.%20In%20case%0A%23%20the%20image%20is%20present%20and%20can%20be%20detected%20correctly%2C%20no%20any%20action%20is%20required.%0A%0AIMAGE%3D%24%28echo%20%241%20%7C%20sed%20%27s%2F%3A.%2A%2F%2F%27%29%0Apodman%20image%20exists%20%24IMAGE%20%7C%7C%20echo%20%22already%20loaded%22%20%7C%7C%20echo%20%22need%20to%20be%20pulled%22%0A%23podman%20images%20%7C%20grep%20%24IMAGE%20%7C%7C%20podman%20rmi%20--force%20%241%20%7C%7C%20true" } } ] } }' nodes: - hostName: "snonode.sno-worker-0.example.domain.redhat.com" role: "master" bmcAddress: "idrac-virtualmedia+https://10.19.28.53/redfish/v1/Systems/System.Embedded.1" bmcCredentialsName: name: "worker0-bmh-secret" bootMACAddress: "e4:43:4b:bd:90:46" bootMode: "UEFI" rootDeviceHints: deviceName: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0 installerArgs: '["--save-partlabel", "data"]' ignitionConfigOverride: | { "ignition": { "version": "3.1.0" }, "systemd": { 
"units": [ { "name": "var-mnt.mount", "enabled": true, "contents": "[Unit]\nDescription=Mount partition with artifacts\nBefore=precache-ocp-images.service\nBindsTo=precache-ocp-images.service\nStopWhenUnneeded=true\n\n[Mount]\nWhat=/dev/disk/by-partlabel/data\nWhere=/var/mnt\nType=xfs\nTimeoutSec=30\n\n[Install]\nRequiredBy=precache-ocp-images.service" }, { "name": "precache-ocp-images.service", "enabled": true, "contents": "[Unit]\nDescription=Extracts the precached OCP images into containers storage\nAfter=var-mnt.mount\nBefore=machine-config-daemon-pull.service nodeip-configuration.service\n\n[Service]\nType=oneshot\nUser=root\nWorkingDirectory=/var/mnt\nExecStart=bash /usr/local/bin/extract-ocp.sh\nTimeoutStopSec=60\n\n[Install]\nWantedBy=multi-user.target" } ] }, "storage": { "files": [ { "overwrite": true, "path": "/usr/local/bin/extract-ocp.sh", "mode": 755, "user": { "name": "root" }, "contents": { "source": "data:,%23%21%2Fbin%2Fbash%0A%0AFOLDER%3D%22%24%7BFOLDER%3A-%24%28pwd%29%7D%22%0AOCP_RELEASE_LIST%3D%22%24%7BOCP_RELEASE_LIST%3A-ocp-images.txt%7D%22%0ABINARY_FOLDER%3D%2Fvar%2Fmnt%0A%0Apushd%20%24FOLDER%0A%0Atotal_copies%3D%24%28sort%20-u%20%24BINARY_FOLDER%2F%24OCP_RELEASE_LIST%20%7C%20wc%20-l%29%20%20%23%20Required%20to%20keep%20track%20of%20the%20pull%20task%20vs%20total%0Acurrent_copy%3D1%0A%0Awhile%20read%20-r%20line%3B%0Ado%0A%20%20uri%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%241%7D%27%29%0A%20%20%23tar%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%242%7D%27%29%0A%20%20podman%20image%20exists%20%24uri%0A%20%20if%20%5B%5B%20%24%3F%20-eq%200%20%5D%5D%3B%20then%0A%20%20%20%20%20%20echo%20%22Skipping%20existing%20image%20%24tar%22%0A%20%20%20%20%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20%20%20%20%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%0A%20%20%20%20%20%20continue%0A%20%20fi%0A%20%20tar%3D%24%28echo%20%22%24uri%22%20%7C%20%20rev%20%7C%20cut%20-d%20%22%2F%22%20-f1%20%7C%20rev%20%7C%20tr%20%22%3A%22%20%22_%22%29%0A%20%20tar%20zxvf%20%24%7Btar%7D.tgz%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-f%20%24%7Btar%7D.gz%3B%20fi%0A%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20skopeo%20copy%20dir%3A%2F%2F%24%28pwd%29%2F%24%7Btar%7D%20containers-storage%3A%24%7Buri%7D%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-rf%20%24%7Btar%7D%3B%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%3B%20fi%0Adone%20%3C%20%24%7BBINARY_FOLDER%7D%2F%24%7BOCP_RELEASE_LIST%7D%0A%0Aexit%200" } } ] } } nodeNetwork: config: interfaces: - name: ens1f0 type: ethernet state: up macAddress: "AA:BB:CC:11:22:33" ipv4: enabled: true dhcp: true ipv6: enabled: false interfaces: - name: "ens1f0" macAddress: "AA:BB:CC:11:22:33" 1 Specifies the cluster image set used for deployment, unless you specify a different image set in the spec.clusters.clusterImageSetNameRef field. 2 Specifies the cluster image set used to deploy an individual cluster. If defined, it overrides the spec.clusterImageSetNameRef at the site level. 14.5.1. Understanding the clusters.ignitionConfigOverride field The clusters.ignitionConfigOverride field adds a configuration in Ignition format during the GitOps ZTP discovery stage. The configuration includes systemd services in the ISO mounted in virtual media. This way, the scripts are part of the discovery RHCOS live ISO and they can be used to load the Assisted Installer (AI) images. 
systemd services The systemd services are var-mnt.mount and precache-images.services . The precache-images.service depends on the disk partition to be mounted in /var/mnt by the var-mnt.mount unit. The service calls a script called extract-ai.sh . extract-ai.sh The extract-ai.sh script extracts and loads the required images from the disk partition to the local container storage. When the script finishes successfully, you can use the images locally. agent-fix-bz1964591 The agent-fix-bz1964591 script is a workaround for an AI issue. To prevent AI from removing the images, which can force the agent.service to pull the images again from the registry, the agent-fix-bz1964591 script checks if the requested container images exist. 14.5.2. Understanding the nodes.installerArgs field The nodes.installerArgs field allows you to configure how the coreos-installer utility writes the RHCOS live ISO to disk. You need to indicate to save the disk partition labeled as data because the artifacts saved in the data partition are needed during the OpenShift Container Platform installation stage. The extra parameters are passed directly to the coreos-installer utility that writes the live RHCOS to disk. On the reboot, the operating system starts from the disk. You can pass several options to the coreos-installer utility: OPTIONS: ... -u, --image-url <URL> Manually specify the image URL -f, --image-file <path> Manually specify a local image file -i, --ignition-file <path> Embed an Ignition config from a file -I, --ignition-url <URL> Embed an Ignition config from a URL ... --save-partlabel <lx>... Save partitions with this label glob --save-partindex <id>... Save partitions with this number or range ... --insecure-ignition Allow Ignition URL without HTTPS or hash 14.5.3. Understanding the nodes.ignitionConfigOverride field Similarly to clusters.ignitionConfigOverride , the nodes.ignitionConfigOverride field allows the addition of configurations in Ignition format to the coreos-installer utility, but at the OpenShift Container Platform installation stage. When the RHCOS is written to disk, the extra configuration included in the GitOps ZTP discovery ISO is no longer available. During the discovery stage, the extra configuration is stored in the memory of the live OS. Note At this stage, the number of container images extracted and loaded is bigger than in the discovery stage. Depending on the OpenShift Container Platform release and whether you install the Day-2 Operators, the installation time can vary. At the installation stage, the var-mnt.mount and precache-ocp.services systemd services are used. precache-ocp.service The precache-ocp.service depends on the disk partition to be mounted in /var/mnt by the var-mnt.mount unit. The precache-ocp.service service calls a script called extract-ocp.sh . Important To extract all the images before the OpenShift Container Platform installation, you must execute precache-ocp.service before executing the machine-config-daemon-pull.service and nodeip-configuration.service services. extract-ocp.sh The extract-ocp.sh script extracts and loads the required images from the disk partition to the local container storage. When you commit the SiteConfig and optional PolicyGenerator or PolicyGenTemplate custom resources (CRs) to the Git repo that Argo CD is monitoring, you can start the GitOps ZTP workflow by syncing the CRs with the hub cluster. 14.6. 
Troubleshooting a "Rendered catalog is invalid" error When you download images by using a local or disconnected registry, you might see the The rendered catalog is invalid error. This means that you are missing certificates of the new registry you want to pull content from. Note The factory-precaching-cli tool image is built on a UBI RHEL image. Certificate paths and locations are the same on RHCOS. Example error Generating list of pre-cached artifacts... error: unable to run command oc-mirror -c /mnt/imageset.yaml file:///tmp/fp-cli-3218002584/mirror --ignore-history --dry-run: Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/publish Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/v2 Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/charts Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/release-signatures backend is not configured in /mnt/imageset.yaml, using stateless mode backend is not configured in /mnt/imageset.yaml, using stateless mode No metadata detected, creating new workspace level=info msg=trying host error=failed to do request: Head "https://eko4.cloud.lab.eng.bos.redhat.com:8443/v2/redhat/redhat-operator-index/manifests/v4.11": x509: certificate signed by unknown authority host=eko4.cloud.lab.eng.bos.redhat.com:8443 The rendered catalog is invalid. Run "oc-mirror list operators --catalog CATALOG-NAME --package PACKAGE-NAME" for more information. error: error rendering new refs: render reference "eko4.cloud.lab.eng.bos.redhat.com:8443/redhat/redhat-operator-index:v4.11": error resolving name : failed to do request: Head "https://eko4.cloud.lab.eng.bos.redhat.com:8443/v2/redhat/redhat-operator-index/manifests/v4.11": x509: certificate signed by unknown authority Procedure Copy the registry certificate into your server: # cp /tmp/eko4-ca.crt /etc/pki/ca-trust/source/anchors/. Update the certificates truststore: # update-ca-trust Mount the host /etc/pki folder into the factory-cli image: # podman run -v /mnt:/mnt -v /root/.docker:/root/.docker -v /etc/pki:/etc/pki --privileged -it --rm quay.io/openshift-kni/telco-ran-tools:latest -- \ factory-precaching-cli download -r 4.18.0 --acm-version 2.5.4 \ --mce-version 2.0.4 -f /mnt \--img quay.io/custom/repository --du-profile -s --skip-imageset | [
"podman pull quay.io/openshift-kni/telco-ran-tools:latest",
"podman run quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli -v",
"factory-precaching-cli version 20221018.120852+main.feecf17",
"curl --globoff -H \"Content-Type: application/json\" -H \"Accept: application/json\" -k -X GET --user USD{username_password} https://USDBMC_ADDRESS/redfish/v1/Managers/Self/VirtualMedia/1 | python -m json.tool",
"curl --globoff -L -w \"%{http_code} %{url_effective}\\\\n\" -ku USD{username_password} -H \"Content-Type: application/json\" -H \"Accept: application/json\" -d '{\"Image\": \"http://[USDHTTPd_IP]/RHCOS-live.iso\"}' -X POST https://USDBMC_ADDRESS/redfish/v1/Managers/Self/VirtualMedia/1/Actions/VirtualMedia.InsertMedia",
"curl --globoff -L -w \"%{http_code} %{url_effective}\\\\n\" -ku USD{username_password} -H \"Content-Type: application/json\" -H \"Accept: application/json\" -d '{\"Boot\":{ \"BootSourceOverrideEnabled\": \"Once\", \"BootSourceOverrideTarget\": \"Cd\", \"BootSourceOverrideMode\": \"UEFI\"}}' -X PATCH https://USDBMC_ADDRESS/redfish/v1/Systems/Self",
"lsblk",
"NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk",
"wipefs -a /dev/nvme0n1",
"/dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa",
"podman run -v /dev:/dev --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli partition \\ 1 -d /dev/nvme0n1 \\ 2 -s 250 3",
"lsblk",
"NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk ββnvme0n1p1 259:3 0 250G 0 part",
"gdisk -l /dev/nvme0n1",
"GPT fdisk (gdisk) version 1.0.3 Partition table scan: MBR: protective BSD: not present APM: not present GPT: present Found valid GPT with protective MBR; using GPT. Disk /dev/nvme0n1: 3125627568 sectors, 1.5 TiB Model: Dell Express Flash PM1725b 1.6TB SFF Sector size (logical/physical): 512/512 bytes Disk identifier (GUID): CB5A9D44-9B3C-4174-A5C1-C64957910B61 Partition table holds up to 128 entries Main partition table begins at sector 2 and ends at sector 33 First usable sector is 34, last usable sector is 3125627534 Partitions will be aligned on 2048-sector boundaries Total free space is 2601338846 sectors (1.2 TiB) Number Start (sector) End (sector) Size Code Name 1 2601338880 3125627534 250.0 GiB 8300 data",
"lsblk -f /dev/nvme0n1",
"NAME FSTYPE LABEL UUID MOUNTPOINT nvme0n1 ββnvme0n1p1 xfs 1bee8ea4-d6cf-4339-b690-a76594794071",
"mount /dev/nvme0n1p1 /mnt/",
"lsblk",
"NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 93.8G 0 loop /run/ephemeral loop1 7:1 0 897.3M 1 loop /sysroot sr0 11:0 1 999M 0 rom /run/media/iso nvme0n1 259:1 0 1.5T 0 disk ββnvme0n1p1 259:2 0 250G 0 part /var/mnt 1",
"taskset 0xffffffff podman run --rm quay.io/openshift-kni/telco-ran-tools:latest factory-precaching-cli download --help",
"oc get csv -A | grep -i advanced-cluster-management",
"open-cluster-management advanced-cluster-management.v2.6.3 Advanced Cluster Management for Kubernetes 2.6.3 advanced-cluster-management.v2.6.3 Succeeded",
"oc get csv -A | grep -i multicluster-engine",
"multicluster-engine cluster-group-upgrades-operator.v0.0.3 cluster-group-upgrades-operator 0.0.3 Pending multicluster-engine multicluster-engine.v2.1.4 multicluster engine for Kubernetes 2.1.4 multicluster-engine.v2.0.3 Succeeded multicluster-engine openshift-gitops-operator.v1.5.7 Red Hat OpenShift GitOps 1.5.7 openshift-gitops-operator.v1.5.6-0.1664915551.p Succeeded multicluster-engine openshift-pipelines-operator-rh.v1.6.4 Red Hat OpenShift Pipelines 1.6.4 openshift-pipelines-operator-rh.v1.6.3 Succeeded",
"mkdir /root/.docker",
"cp config.json /root/.docker/config.json 1",
"podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools -- factory-precaching-cli download \\ 1 -r 4.18.0 \\ 2 --acm-version 2.6.3 \\ 3 --mce-version 2.1.4 \\ 4 -f /mnt \\ 5 --img quay.io/custom/repository 6",
"Generated /mnt/imageset.yaml Generating list of pre-cached artifacts Processing artifact [1/176]: ocp-v4.0-art-dev@sha256_6ac2b96bf4899c01a87366fd0feae9f57b1b61878e3b5823da0c3f34f707fbf5 Processing artifact [2/176]: ocp-v4.0-art-dev@sha256_f48b68d5960ba903a0d018a10544ae08db5802e21c2fa5615a14fc58b1c1657c Processing artifact [3/176]: ocp-v4.0-art-dev@sha256_a480390e91b1c07e10091c3da2257180654f6b2a735a4ad4c3b69dbdb77bbc06 Processing artifact [4/176]: ocp-v4.0-art-dev@sha256_ecc5d8dbd77e326dba6594ff8c2d091eefbc4d90c963a9a85b0b2f0e6155f995 Processing artifact [5/176]: ocp-v4.0-art-dev@sha256_274b6d561558a2f54db08ea96df9892315bb773fc203b1dbcea418d20f4c7ad1 Processing artifact [6/176]: ocp-v4.0-art-dev@sha256_e142bf5020f5ca0d1bdda0026bf97f89b72d21a97c9cc2dc71bf85050e822bbf Processing artifact [175/176]: ocp-v4.0-art-dev@sha256_16cd7eda26f0fb0fc965a589e1e96ff8577e560fcd14f06b5fda1643036ed6c8 Processing artifact [176/176]: ocp-v4.0-art-dev@sha256_cf4d862b4a4170d4f611b39d06c31c97658e309724f9788e155999ae51e7188f Summary: Release: 4.18.0 Hub Version: 2.6.3 ACM Version: 2.6.3 MCE Version: 2.1.4 Include DU Profile: No Workers: 83",
"ls -l /mnt 1",
"-rw-r--r--. 1 root root 136352323 Oct 31 15:19 ocp-v4.0-art-dev@sha256_edec37e7cd8b1611d0031d45e7958361c65e2005f145b471a8108f1b54316c07.tgz -rw-r--r--. 1 root root 156092894 Oct 31 15:33 ocp-v4.0-art-dev@sha256_ee51b062b9c3c9f4fe77bd5b3cc9a3b12355d040119a1434425a824f137c61a9.tgz -rw-r--r--. 1 root root 172297800 Oct 31 15:29 ocp-v4.0-art-dev@sha256_ef23d9057c367a36e4a5c4877d23ee097a731e1186ed28a26c8d21501cd82718.tgz -rw-r--r--. 1 root root 171539614 Oct 31 15:23 ocp-v4.0-art-dev@sha256_f0497bb63ef6834a619d4208be9da459510df697596b891c0c633da144dbb025.tgz -rw-r--r--. 1 root root 160399150 Oct 31 15:20 ocp-v4.0-art-dev@sha256_f0c339da117cde44c9aae8d0bd054bceb6f19fdb191928f6912a703182330ac2.tgz -rw-r--r--. 1 root root 175962005 Oct 31 15:17 ocp-v4.0-art-dev@sha256_f19dd2e80fb41ef31d62bb8c08b339c50d193fdb10fc39cc15b353cbbfeb9b24.tgz -rw-r--r--. 1 root root 174942008 Oct 31 15:33 ocp-v4.0-art-dev@sha256_f1dbb81fa1aa724e96dd2b296b855ff52a565fbef003d08030d63590ae6454df.tgz -rw-r--r--. 1 root root 246693315 Oct 31 15:31 ocp-v4.0-art-dev@sha256_f44dcf2c94e4fd843cbbf9b11128df2ba856cd813786e42e3da1fdfb0f6ddd01.tgz -rw-r--r--. 1 root root 170148293 Oct 31 15:00 ocp-v4.0-art-dev@sha256_f48b68d5960ba903a0d018a10544ae08db5802e21c2fa5615a14fc58b1c1657c.tgz -rw-r--r--. 1 root root 168899617 Oct 31 15:16 ocp-v4.0-art-dev@sha256_f5099b0989120a8d08a963601214b5c5cb23417a707a8624b7eb52ab788a7f75.tgz -rw-r--r--. 1 root root 176592362 Oct 31 15:05 ocp-v4.0-art-dev@sha256_f68c0e6f5e17b0b0f7ab2d4c39559ea89f900751e64b97cb42311a478338d9c3.tgz -rw-r--r--. 1 root root 157937478 Oct 31 15:37 ocp-v4.0-art-dev@sha256_f7ba33a6a9db9cfc4b0ab0f368569e19b9fa08f4c01a0d5f6a243d61ab781bd8.tgz -rw-r--r--. 1 root root 145535253 Oct 31 15:26 ocp-v4.0-art-dev@sha256_f8f098911d670287826e9499806553f7a1dd3e2b5332abbec740008c36e84de5.tgz -rw-r--r--. 1 root root 158048761 Oct 31 15:40 ocp-v4.0-art-dev@sha256_f914228ddbb99120986262168a705903a9f49724ffa958bb4bf12b2ec1d7fb47.tgz -rw-r--r--. 1 root root 167914526 Oct 31 15:37 ocp-v4.0-art-dev@sha256_fa3ca9401c7a9efda0502240aeb8d3ae2d239d38890454f17fe5158b62305010.tgz -rw-r--r--. 1 root root 164432422 Oct 31 15:24 ocp-v4.0-art-dev@sha256_fc4783b446c70df30b3120685254b40ce13ba6a2b0bf8fb1645f116cf6a392f1.tgz -rw-r--r--. 1 root root 306643814 Oct 31 15:11 troubleshoot@sha256_b86b8aea29a818a9c22944fd18243fa0347c7a2bf1ad8864113ff2bb2d8e0726.tgz",
"podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \\ 1 -r 4.18.0 \\ 2 --acm-version 2.6.3 \\ 3 --mce-version 2.1.4 \\ 4 -f /mnt \\ 5 --img quay.io/custom/repository 6 --du-profile -s 7",
"Generated /mnt/imageset.yaml Generating list of pre-cached artifacts Processing artifact [1/379]: ocp-v4.0-art-dev@sha256_7753a8d9dd5974be8c90649aadd7c914a3d8a1f1e016774c7ac7c9422e9f9958 Processing artifact [2/379]: ose-kube-rbac-proxy@sha256_c27a7c01e5968aff16b6bb6670423f992d1a1de1a16e7e260d12908d3322431c Processing artifact [3/379]: ocp-v4.0-art-dev@sha256_370e47a14c798ca3f8707a38b28cfc28114f492bb35fe1112e55d1eb51022c99 Processing artifact [378/379]: ose-local-storage-operator@sha256_0c81c2b79f79307305e51ce9d3837657cf9ba5866194e464b4d1b299f85034d0 Processing artifact [379/379]: multicluster-operators-channel-rhel8@sha256_c10f6bbb84fe36e05816e873a72188018856ad6aac6cc16271a1b3966f73ceb3 Summary: Release: 4.18.0 Hub Version: 2.6.3 ACM Version: 2.6.3 MCE Version: 2.1.4 Include DU Profile: Yes Workers: 83",
"podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \\ 1 -r 4.18.0 \\ 2 --acm-version 2.6.3 \\ 3 --mce-version 2.1.4 \\ 4 -f /mnt \\ 5 --img quay.io/custom/repository 6 --du-profile -s \\ 7 --generate-imageset 8",
"Generated /mnt/imageset.yaml",
"apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration mirror: platform: channels: - name: stable-4.18 minVersion: 4.18.0 1 maxVersion: 4.18.0 additionalImages: - name: quay.io/custom/repository operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.18 packages: - name: advanced-cluster-management 2 channels: - name: 'release-2.6' minVersion: 2.6.3 maxVersion: 2.6.3 - name: multicluster-engine 3 channels: - name: 'stable-2.1' minVersion: 2.1.4 maxVersion: 2.1.4 - name: local-storage-operator 4 channels: - name: 'stable' - name: ptp-operator 5 channels: - name: 'stable' - name: sriov-network-operator 6 channels: - name: 'stable' - name: cluster-logging 7 channels: - name: 'stable' - name: lvms-operator 8 channels: - name: 'stable-4.18' - name: amq7-interconnect-operator 9 channels: - name: '1.10.x' - name: bare-metal-event-relay 10 channels: - name: 'stable' - catalog: registry.redhat.io/redhat/certified-operator-index:v4.18 packages: - name: sriov-fec 11 channels: - name: 'stable'",
"apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration mirror: platform: [...] operators: - catalog: eko4.cloud.lab.eng.bos.redhat.com:8443/redhat/certified-operator-index:v4.18 packages: - name: sriov-fec channels: - name: 'stable'",
"cp /tmp/eko4-ca.crt /etc/pki/ca-trust/source/anchors/.",
"update-ca-trust",
"podman run -v /mnt:/mnt -v /root/.docker:/root/.docker -v /etc/pki:/etc/pki --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \\ 1 -r 4.18.0 \\ 2 --acm-version 2.6.3 \\ 3 --mce-version 2.1.4 \\ 4 -f /mnt \\ 5 --img quay.io/custom/repository 6 --du-profile -s \\ 7 --skip-imageset 8",
"podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download -r 4.18.0 --acm-version 2.6.3 --mce-version 2.1.4 -f /mnt --img quay.io/custom/repository --du-profile -s --skip-imageset",
"apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"example-5g-lab\" namespace: \"example-5g-lab\" spec: baseDomain: \"example.domain.redhat.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"img4.9.10-x86-64-appsub\" 1 sshPublicKey: \"ssh-rsa ...\" clusters: - clusterName: \"sno-worker-0\" clusterImageSetNameRef: \"eko4-img4.11.5-x86-64-appsub\" 2 clusterLabels: group-du-sno: \"\" common-411: true sites : \"example-5g-lab\" vendor: \"OpenShift\" clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.19.32.192/26 serviceNetwork: - 172.30.0.0/16 networkType: \"OVNKubernetes\" additionalNTPSources: - clock.corp.redhat.com ignitionConfigOverride: '{ \"ignition\": { \"version\": \"3.1.0\" }, \"systemd\": { \"units\": [ { \"name\": \"var-mnt.mount\", \"enabled\": true, \"contents\": \"[Unit]\\nDescription=Mount partition with artifacts\\nBefore=precache-images.service\\nBindsTo=precache-images.service\\nStopWhenUnneeded=true\\n\\n[Mount]\\nWhat=/dev/disk/by-partlabel/data\\nWhere=/var/mnt\\nType=xfs\\nTimeoutSec=30\\n\\n[Install]\\nRequiredBy=precache-images.service\" }, { \"name\": \"precache-images.service\", \"enabled\": true, \"contents\": \"[Unit]\\nDescription=Extracts the precached images in discovery stage\\nAfter=var-mnt.mount\\nBefore=agent.service\\n\\n[Service]\\nType=oneshot\\nUser=root\\nWorkingDirectory=/var/mnt\\nExecStart=bash /usr/local/bin/extract-ai.sh\\n#TimeoutStopSec=30\\n\\n[Install]\\nWantedBy=multi-user.target default.target\\nWantedBy=agent.service\" } ] }, \"storage\": { \"files\": [ { \"overwrite\": true, \"path\": \"/usr/local/bin/extract-ai.sh\", \"mode\": 755, \"user\": { \"name\": \"root\" }, \"contents\": { \"source\": \"data:,%23%21%2Fbin%2Fbash%0A%0AFOLDER%3D%22%24%7BFOLDER%3A-%24%28pwd%29%7D%22%0AOCP_RELEASE_LIST%3D%22%24%7BOCP_RELEASE_LIST%3A-ai-images.txt%7D%22%0ABINARY_FOLDER%3D%2Fvar%2Fmnt%0A%0Apushd%20%24FOLDER%0A%0Atotal_copies%3D%24%28sort%20-u%20%24BINARY_FOLDER%2F%24OCP_RELEASE_LIST%20%7C%20wc%20-l%29%20%20%23%20Required%20to%20keep%20track%20of%20the%20pull%20task%20vs%20total%0Acurrent_copy%3D1%0A%0Awhile%20read%20-r%20line%3B%0Ado%0A%20%20uri%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%241%7D%27%29%0A%20%20%23tar%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%242%7D%27%29%0A%20%20podman%20image%20exists%20%24uri%0A%20%20if%20%5B%5B%20%24%3F%20-eq%200%20%5D%5D%3B%20then%0A%20%20%20%20%20%20echo%20%22Skipping%20existing%20image%20%24tar%22%0A%20%20%20%20%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20%20%20%20%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%0A%20%20%20%20%20%20continue%0A%20%20fi%0A%20%20tar%3D%24%28echo%20%22%24uri%22%20%7C%20%20rev%20%7C%20cut%20-d%20%22%2F%22%20-f1%20%7C%20rev%20%7C%20tr%20%22%3A%22%20%22_%22%29%0A%20%20tar%20zxvf%20%24%7Btar%7D.tgz%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-f%20%24%7Btar%7D.gz%3B%20fi%0A%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20skopeo%20copy%20dir%3A%2F%2F%24%28pwd%29%2F%24%7Btar%7D%20containers-storage%3A%24%7Buri%7D%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-rf%20%24%7Btar%7D%3B%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%3B%20fi%0Adone%20%3C%20%24%7BBINARY_FOLDER%7D%2F%24%7BOCP_RELEASE_LIST%7D%0A%0A%23%20workaround%20while%20https%3A%2F%2Fgithub.com%2Fopenshift%2Fassisted-service%2Fpull%2F3546%0A%23cp%20%2Fvar%2Fmnt%2Fmodified-rh
cos-4.10.3-x86_64-metal.x86_64.raw.gz%20%2Fvar%2Ftmp%2F.%0A%0Aexit%200\" } }, { \"overwrite\": true, \"path\": \"/usr/local/bin/agent-fix-bz1964591\", \"mode\": 755, \"user\": { \"name\": \"root\" }, \"contents\": { \"source\": \"data:,%23%21%2Fusr%2Fbin%2Fsh%0A%0A%23%20This%20script%20is%20a%20workaround%20for%20bugzilla%201964591%20where%20symlinks%20inside%20%2Fvar%2Flib%2Fcontainers%2F%20get%0A%23%20corrupted%20under%20some%20circumstances.%0A%23%0A%23%20In%20order%20to%20let%20agent.service%20start%20correctly%20we%20are%20checking%20here%20whether%20the%20requested%0A%23%20container%20image%20exists%20and%20in%20case%20%22podman%20images%22%20returns%20an%20error%20we%20try%20removing%20the%20faulty%0A%23%20image.%0A%23%0A%23%20In%20such%20a%20scenario%20agent.service%20will%20detect%20the%20image%20is%20not%20present%20and%20pull%20it%20again.%20In%20case%0A%23%20the%20image%20is%20present%20and%20can%20be%20detected%20correctly%2C%20no%20any%20action%20is%20required.%0A%0AIMAGE%3D%24%28echo%20%241%20%7C%20sed%20%27s%2F%3A.%2A%2F%2F%27%29%0Apodman%20image%20exists%20%24IMAGE%20%7C%7C%20echo%20%22already%20loaded%22%20%7C%7C%20echo%20%22need%20to%20be%20pulled%22%0A%23podman%20images%20%7C%20grep%20%24IMAGE%20%7C%7C%20podman%20rmi%20--force%20%241%20%7C%7C%20true\" } } ] } }' nodes: - hostName: \"snonode.sno-worker-0.example.domain.redhat.com\" role: \"master\" bmcAddress: \"idrac-virtualmedia+https://10.19.28.53/redfish/v1/Systems/System.Embedded.1\" bmcCredentialsName: name: \"worker0-bmh-secret\" bootMACAddress: \"e4:43:4b:bd:90:46\" bootMode: \"UEFI\" rootDeviceHints: deviceName: /dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0 installerArgs: '[\"--save-partlabel\", \"data\"]' ignitionConfigOverride: | { \"ignition\": { \"version\": \"3.1.0\" }, \"systemd\": { \"units\": [ { \"name\": \"var-mnt.mount\", \"enabled\": true, \"contents\": \"[Unit]\\nDescription=Mount partition with artifacts\\nBefore=precache-ocp-images.service\\nBindsTo=precache-ocp-images.service\\nStopWhenUnneeded=true\\n\\n[Mount]\\nWhat=/dev/disk/by-partlabel/data\\nWhere=/var/mnt\\nType=xfs\\nTimeoutSec=30\\n\\n[Install]\\nRequiredBy=precache-ocp-images.service\" }, { \"name\": \"precache-ocp-images.service\", \"enabled\": true, \"contents\": \"[Unit]\\nDescription=Extracts the precached OCP images into containers storage\\nAfter=var-mnt.mount\\nBefore=machine-config-daemon-pull.service nodeip-configuration.service\\n\\n[Service]\\nType=oneshot\\nUser=root\\nWorkingDirectory=/var/mnt\\nExecStart=bash /usr/local/bin/extract-ocp.sh\\nTimeoutStopSec=60\\n\\n[Install]\\nWantedBy=multi-user.target\" } ] }, \"storage\": { \"files\": [ { \"overwrite\": true, \"path\": \"/usr/local/bin/extract-ocp.sh\", \"mode\": 755, \"user\": { \"name\": \"root\" }, \"contents\": { \"source\": 
\"data:,%23%21%2Fbin%2Fbash%0A%0AFOLDER%3D%22%24%7BFOLDER%3A-%24%28pwd%29%7D%22%0AOCP_RELEASE_LIST%3D%22%24%7BOCP_RELEASE_LIST%3A-ocp-images.txt%7D%22%0ABINARY_FOLDER%3D%2Fvar%2Fmnt%0A%0Apushd%20%24FOLDER%0A%0Atotal_copies%3D%24%28sort%20-u%20%24BINARY_FOLDER%2F%24OCP_RELEASE_LIST%20%7C%20wc%20-l%29%20%20%23%20Required%20to%20keep%20track%20of%20the%20pull%20task%20vs%20total%0Acurrent_copy%3D1%0A%0Awhile%20read%20-r%20line%3B%0Ado%0A%20%20uri%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%241%7D%27%29%0A%20%20%23tar%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%242%7D%27%29%0A%20%20podman%20image%20exists%20%24uri%0A%20%20if%20%5B%5B%20%24%3F%20-eq%200%20%5D%5D%3B%20then%0A%20%20%20%20%20%20echo%20%22Skipping%20existing%20image%20%24tar%22%0A%20%20%20%20%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20%20%20%20%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%0A%20%20%20%20%20%20continue%0A%20%20fi%0A%20%20tar%3D%24%28echo%20%22%24uri%22%20%7C%20%20rev%20%7C%20cut%20-d%20%22%2F%22%20-f1%20%7C%20rev%20%7C%20tr%20%22%3A%22%20%22_%22%29%0A%20%20tar%20zxvf%20%24%7Btar%7D.tgz%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-f%20%24%7Btar%7D.gz%3B%20fi%0A%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20skopeo%20copy%20dir%3A%2F%2F%24%28pwd%29%2F%24%7Btar%7D%20containers-storage%3A%24%7Buri%7D%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-rf%20%24%7Btar%7D%3B%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%3B%20fi%0Adone%20%3C%20%24%7BBINARY_FOLDER%7D%2F%24%7BOCP_RELEASE_LIST%7D%0A%0Aexit%200\" } } ] } } nodeNetwork: config: interfaces: - name: ens1f0 type: ethernet state: up macAddress: \"AA:BB:CC:11:22:33\" ipv4: enabled: true dhcp: true ipv6: enabled: false interfaces: - name: \"ens1f0\" macAddress: \"AA:BB:CC:11:22:33\"",
"OPTIONS: -u, --image-url <URL> Manually specify the image URL -f, --image-file <path> Manually specify a local image file -i, --ignition-file <path> Embed an Ignition config from a file -I, --ignition-url <URL> Embed an Ignition config from a URL --save-partlabel <lx> Save partitions with this label glob --save-partindex <id> Save partitions with this number or range --insecure-ignition Allow Ignition URL without HTTPS or hash",
"Generating list of pre-cached artifacts error: unable to run command oc-mirror -c /mnt/imageset.yaml file:///tmp/fp-cli-3218002584/mirror --ignore-history --dry-run: Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/publish Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/v2 Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/charts Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/release-signatures backend is not configured in /mnt/imageset.yaml, using stateless mode backend is not configured in /mnt/imageset.yaml, using stateless mode No metadata detected, creating new workspace level=info msg=trying next host error=failed to do request: Head \"https://eko4.cloud.lab.eng.bos.redhat.com:8443/v2/redhat/redhat-operator-index/manifests/v4.11\": x509: certificate signed by unknown authority host=eko4.cloud.lab.eng.bos.redhat.com:8443 The rendered catalog is invalid. Run \"oc-mirror list operators --catalog CATALOG-NAME --package PACKAGE-NAME\" for more information. error: error rendering new refs: render reference \"eko4.cloud.lab.eng.bos.redhat.com:8443/redhat/redhat-operator-index:v4.11\": error resolving name : failed to do request: Head \"https://eko4.cloud.lab.eng.bos.redhat.com:8443/v2/redhat/redhat-operator-index/manifests/v4.11\": x509: certificate signed by unknown authority",
"cp /tmp/eko4-ca.crt /etc/pki/ca-trust/source/anchors/.",
"update-ca-trust",
"podman run -v /mnt:/mnt -v /root/.docker:/root/.docker -v /etc/pki:/etc/pki --privileged -it --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download -r 4.18.0 --acm-version 2.5.4 --mce-version 2.0.4 -f /mnt \\--img quay.io/custom/repository --du-profile -s --skip-imageset"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/edge_computing/ztp-pre-staging-tool |
Installing on IBM Power Virtual Server | Installing on IBM Power Virtual Server OpenShift Container Platform 4.14 Installing OpenShift Container Platform on IBM Power Virtual Server Red Hat OpenShift Documentation Team | [
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"ibmcloud plugin install cis",
"ibmcloud login",
"ibmcloud cis instance-create <instance_name> standard-next 1",
"ibmcloud cis instance-set <instance_CRN> 1",
"ibmcloud cis domain-add <domain_name> 1",
"ibmcloud resource service-instance <workspace name>",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"export IBMCLOUD_API_KEY=<api_key>",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: {} replicas: 3 controlPlane: 4 5 architecture: ppc64le hyperthreading: Enabled 6 name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: example-cluster-name networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 7 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id region: powervs-region zone: powervs-zone powervsResourceGroup: \"ibmcloud-resource-group\" 8 serviceInstanceID: \"powervs-region-service-instance-id\" vpcRegion : vpc-region publish: External pullSecret: '{\"auths\": ...}' 9 sshKey: ssh-ed25519 AAAA... 10",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled",
"./openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer",
"ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4",
"grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"export IBMCLOUD_API_KEY=<api_key>",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: {} replicas: 3 controlPlane: 4 5 architecture: ppc64le hyperthreading: Enabled 6 name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: example-cluster-existing-vpc networking: clusterNetwork: - cidr: 10.128.0.0/14 7 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 8 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id powervsResourceGroup: \"ibmcloud-resource-group\" region: powervs-region vpcRegion : vpc-region vpcName: name-of-existing-vpc 9 vpcSubnets: 10 - powervs-region-example-subnet-1 zone: powervs-zone serviceInstanceID: \"powervs-region-service-instance-id\" credentialsMode: Manual publish: External 11 pullSecret: '{\"auths\": ...}' 12 fips: false sshKey: ssh-ed25519 AAAA... 13",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled",
"./openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer",
"ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4",
"grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"export IBMCLOUD_API_KEY=<api_key>",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: {} replicas: 3 controlPlane: 4 5 architecture: ppc64le hyperthreading: Enabled 6 name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: example-private-cluster-name networking: clusterNetwork: - cidr: 10.128.0.0/14 7 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 8 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id powervsResourceGroup: \"ibmcloud-resource-group\" region: powervs-region vpcName: name-of-existing-vpc 9 cloudConnectionName: powervs-region-example-cloud-con-priv vpcSubnets: - powervs-region-example-subnet-1 vpcRegion : vpc-region zone: powervs-zone serviceInstanceID: \"powervs-region-service-instance-id\" publish: Internal 10 pullSecret: '{\"auths\": ...}' 11 sshKey: ssh-ed25519 AAAA... 12",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled",
"./openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer",
"ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4",
"grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"export IBMCLOUD_API_KEY=<api_key>",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"vpcName: <existing_vpc> vpcSubnets: <vpcSubnet>",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"publish: Internal",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: example-restricted-cluster-name 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 10 networkType: OVNKubernetes 11 serviceNetwork: - 192.168.0.0/24 platform: powervs: userid: ibm-user-id powervsResourceGroup: \"ibmcloud-resource-group\" 12 region: \"powervs-region\" vpcRegion: \"vpc-region\" vpcName: name-of-existing-vpc 13 vpcSubnets: 14 - name-of-existing-vpc-subnet zone: \"powervs-zone\" serviceInstanceID: \"service-instance-id\" publish: Internal credentialsMode: Manual pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: ssh-ed25519 AAAA... 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled",
"./openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer",
"ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4",
"grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"ibmcloud is volumes --resource-group-name <infrastructure_id>",
"ibmcloud is volume-delete --force <volume_id>",
"export IBMCLOUD_API_KEY=<api_key>",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2",
"ccoctl ibmcloud delete-service-id --credentials-requests-dir <path_to_credential_requests_directory> --name <cluster_name>",
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"platform: powervs: userID:",
"platform: powervs: powervsResourceGroup:",
"platform: powervs: region:",
"platform: powervs: zone:",
"platform: powervs: serviceInstanceID:",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:",
"platform: powervs: vpcRegion:",
"platform: powervs: vpcSubnets:",
"platform: powervs: vpcName:",
"platform: powervs: cloudConnectionName:",
"platform: powervs: clusterOSImage:",
"platform: powervs: defaultMachinePlatform:",
"platform: powervs: memoryGiB:",
"platform: powervs: procType:",
"platform: powervs: processors:",
"platform: powervs: sysType:"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/installing_on_ibm_power_virtual_server/index |
Chapter 3. Applying the RHEL release lock | Chapter 3. Applying the RHEL release lock SAP supports SAP HANA with certain minor RHEL releases, for example RHEL 8.2. You need to apply a release lock to make sure your RHEL system is set to a certain minor release. For more information on which minor RHEL 8 releases are supported by SAP, see SAP note 2235581. Important Updating your RHEL system before applying a release lock will result in dependency errors and a possible upgrade to a RHEL 8 minor version that is not supported by SAP HANA. It is advised that you run yum installations and updates only after the release lock is applied. Note that if you used the redhat_sap.sap_rhsm Ansible role to register and subscribe your RHEL server to the RHEL for SAP Solutions repositories, you can skip this step and proceed to Installing RHEL System Roles for SAP. For more information, see the sap_rhsm section on the Ansible Galaxy portal. Prerequisites root access Procedure Clear the dnf cache: Set the release lock: Replace 8.x with the supported minor release of RHEL 8 (for example 8.2). Additional resources How to tie a system to a specific update of RHEL | [
"rm -rf /var/cache/dnf",
"subscription-manager release --set= 8.x"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/configuring_rhel_8_for_sap_hana2_installation/proc_applying-the-rhel-release-lock_configuring-rhel-8-for-sap-hana2-installation |
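A minimal verification sketch, added for illustration and not part of the Red Hat procedure above: after setting the lock, confirm the pinned minor release before running any package operations. These are standard subscription-manager and dnf calls; the release they report should match the SAP-supported minor release you set.
subscription-manager release --show
dnf clean all
dnf update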
3.4. Red Hat OpenStack Platform 16.2 Director Deployment Tools for RHEL 8 for Power,little endian (RPMs) | 3.4. Red Hat OpenStack Platform 16.2 Director Deployment Tools for RHEL 8 for Power,little endian (RPMs) The following table outlines the packages included in the openstack-16.2-deployment-tools-for-rhel-8-ppc64le-rpms repository. Table 3.4. Red Hat OpenStack Platform 16.2 Director Deployment Tools for RHEL 8 for Power,little endian (RPMs) Packages Name Version Advisory ansible-pacemaker 1.0.4-2.20210527194420.accaf26.el8ost.2 RHEA-2021:3483 cpp-hocon 0.1.8-3.el8ost RHEA-2021:3483 crudini 0.9-11.el8ost.1 RHEA-2021:3483 dib-utils 0.0.11-2.20210527224837.51661c3.el8ost.1 RHEA-2021:3483 facter 3.9.3-14.el8ost.1 RHEA-2021:3483 heat-cfntools 1.4.2-11.el8ost.1 RHEA-2021:3483 hiera 3.3.1-10.el8ost.1 RHEA-2021:3483 leatherman 1.4.5-6.el8ost.1 RHEA-2021:3483 openstack-heat-agents 1.10.1-2.20210528020147.96b819c.el8ost.2 RHEA-2021:3483 openstack-selinux 0.8.28-2.20210612124808.9cd3782.el8ost.1 RHEA-2021:3483 os-apply-config 10.6.0-2.20210528113933.41d86e3.el8ost.2 RHEA-2021:3483 os-collect-config 10.6.0-2.20210528112824.5b8355d.el8ost.2 RHEA-2021:3483 os-net-config 11.5.0-2.20210528113720.48c6710.el8ost.2 RHEA-2021:3483 os-refresh-config 10.4.1-2.20210528093925.d0fdb42.el8ost.2 RHEA-2021:3483 paunch-services 5.5.1-2.20210527204730.9b6bef4.el8ost.1 RHEA-2021:3483 plotnetcfg 0.4.1-14.el8ost.1 RHEA-2021:3483 puppet 5.5.10-10.el8ost.1 RHEA-2021:3483 puppet-aodh 15.5.0-2.20210601020548.09972d8.el8ost.2 RHEA-2021:3483 puppet-apache 5.1.0-2.20210528023135.1fa9b1c.el8ost.2 RHEA-2021:3483 puppet-archive 4.2.1-2.20210527171609.0538163.el8ost.2 RHEA-2021:3483 puppet-auditd 2.2.1-2.20210527172515.189b22b.el8ost.2 RHEA-2021:3483 puppet-barbican 15.5.0-2.20210601003945.6881351.el8ost.2 RHEA-2021:3483 puppet-cassandra 2.7.4-2.20210528035721.9954256.el8ost.2 RHEA-2021:3483 puppet-ceilometer 15.5.0-2.20210601004737.2f62d7f.el8ost.2 RHEA-2021:3483 puppet-ceph 3.1.2-2.20210603181657.ffa80da.el8ost.1 RHEA-2021:3483 puppet-certmonger 2.7.0-2.20210528094925.b2f2d23.el8ost.1 RHEA-2021:3483 puppet-cinder 15.5.0-2.20210601004754.d67dac0.el8ost.2 RHEA-2021:3483 puppet-collectd 12.0.1-2.20210528063800.4686e16.el8ost.1 RHEA-2021:3483 puppet-concat 6.1.0-2.20210528022416.9baa8fc.el8ost.2 RHEA-2021:3483 puppet-contrail 1.0.1-2.20210528040000.6f87929.el8ost.2 RHEA-2021:3483 puppet-corosync 6.0.2-2.20210528025812.961add3.el8ost.2 RHEA-2021:3483 puppet-datacat 0.6.2-2.20210528040724.5cce8f2.el8ost.2 RHEA-2021:3483 puppet-designate 15.6.0-2.20210601020041.699d285.el8ost.2 RHEA-2021:3483 puppet-dns 6.2.1-2.20210528040902.2ae1cd7.el8ost.2 RHEA-2021:3483 puppet-ec2api 15.4.1-2.20210528041724.e38e26c.el8ost.2 RHEA-2021:3483 puppet-elasticsearch 6.4.0-2.20210528041855.725afd6.el8ost.2 RHEA-2021:3483 puppet-etcd 1.12.3-2.20210528042618.123d2af.el8ost.2 RHEA-2021:3483 puppet-fdio 18.2-2.20210528042751.6fd1c8e.el8ost.2 RHEA-2021:3483 puppet-firewall 2.1.0-2.20210528025107.4f4437a.el8ost.2 RHEA-2021:3483 puppet-git 0.5.0-2.20210528043748.4e4498e.el8ost.2 RHEA-2021:3483 puppet-glance 15.5.0-2.20210601005740.8a23345.el8ost.2 RHEA-2021:3483 puppet-gnocchi 15.5.0-2.20210601005850.c830d4b.el8ost.2 RHEA-2021:3483 puppet-haproxy 4.1.0-2.20210528044603.df96ffc.el8ost.2 RHEA-2021:3483 puppet-headless 5.5.10-10.el8ost.1 RHEA-2021:3483 puppet-heat 15.5.0-2.20210601010737.31e48ae.el8ost.2 RHEA-2021:3483 puppet-horizon 15.5.0-2.20210601010851.c300380.el8ost.2 RHEA-2021:3483 puppet-inifile 3.1.0-2.20210528022247.91efced.el8ost.2 RHEA-2021:3483 
puppet-ipaclient 2.5.2-2.20210528044835.b086731.el8ost.2 RHEA-2021:3483 puppet-ironic 15.5.0-2.20210601011633.d553541.el8ost.2 RHEA-2021:3483 puppet-java 5.0.1-2.20210528045554.e57cbc8.el8ost.2 RHEA-2021:3483 puppet-kafka 5.3.1-2.20210528045828.88aa866.el8ost.2 RHEA-2021:3483 puppet-keepalived 0.0.2-2.20210528050548.bbca37a.el8ost.2 RHEA-2021:3483 puppet-keystone 15.5.0-2.20210601001735.1dc5b6e.el8ost.2 RHEA-2021:3483 puppet-kibana3 0.0.4-2.20210528050729.6ca9631.el8ost.2 RHEA-2021:3483 puppet-kmod 2.3.1-2.20210528051537.41e2a2b.el8ost.2 RHEA-2021:3483 puppet-manila 15.5.0-2.20210601014536.9c6604a.el8ost.2 RHEA-2021:3483 puppet-memcached 6.0.0-2.20210528123058.4c70dbd.el8ost.2 RHEA-2021:3483 puppet-midonet 1.0.0-2.20210528053422.a8cec1d.el8ost.1 RHEA-2021:3483 puppet-mistral 15.5.0-2.20210601011954.5dcd237.el8ost.2 RHEA-2021:3483 puppet-module-data 0.5.1-2.20210528052520.28dafce.el8ost.2 RHEA-2021:3483 puppet-mysql 10.4.0-2.20210528024030.95f9b98.el8ost.2 RHEA-2021:3483 puppet-n1k-vsm 0.0.2-2.20210528053535.92401b8.el8ost.2 RHEA-2021:3483 puppet-neutron 15.6.0-2.20210601015533.7f36270.el8ost.2 RHEA-2021:3483 puppet-nova 15.8.0-2.20210601013941.99789e3.el8ost.2 RHEA-2021:3483 puppet-nssdb 1.0.1-2.20210528031836.2ed2a2d.el8ost.2 RHEA-2021:3483 puppet-octavia 15.5.0-2.20210601021142.2f54828.el8ost.2 RHEA-2021:3483 puppet-opendaylight 8.4.3-2.20210528054417.bbe7ce5.el8ost.1 RHEA-2021:3483 puppet-openstack_extras 15.4.1-2.20210601022242.6ab7806.el8ost.2 RHEA-2021:3483 puppet-openstacklib 15.5.0-2.20210531234811.e3b61ab.el8ost.2 RHEA-2021:3483 puppet-oslo 15.5.0-2.20210531235814.883fa53.el8ost.2 RHEA-2021:3483 puppet-ovn 15.5.0-2.20210601013539.a6b0f69.el8ost.2 RHEA-2021:3483 puppet-pacemaker 1.1.0-2.20210528101831.6e272bf.el8ost.2 RHEA-2021:3483 puppet-panko 15.4.1-0.20191014140135.49b7b3e.el8ost.1 RHEA-2021:3483 puppet-placement 2.5.0-2.20210601002849.8fe110e.el8ost.2 RHEA-2021:3483 puppet-qdr 4.4.1-2.20210528054536.d141271.el8ost.2 RHEA-2021:3483 puppet-rabbitmq 10.1.2-2.20210528110135.8b9b006.el8ost.2 RHEA-2021:3483 puppet-redis 4.2.2-2.20210528033823.be8d097.el8ost.2 RHEA-2021:3483 puppet-remote 10.0.0-2.20210528032009.7420908.el8ost.2 RHEA-2021:3483 puppet-rsync 1.1.3-2.20210528081652.b3ee352.el8ost.2 RHEA-2021:3483 puppet-rsyslog 3.3.1-2.20210528055214.0c2b6c8.el8ost.2 RHEA-2021:3483 puppet-sahara 15.4.1-2.20210601012947.e8c5a9d.el8ost.2 RHEA-2021:3483 puppet-snmp 3.9.0-2.20210528055525.5d73485.el8ost.2 RHEA-2021:3483 puppet-ssh 6.0.0-2.20210528033955.65570a3.el8ost.2 RHEA-2021:3483 puppet-staging 1.0.4-2.20210528023218.b466d93.el8ost.2 RHEA-2021:3483 puppet-stdlib 6.1.0-2.20210527224837.5aa891c.el8ost.2 RHEA-2021:3483 puppet-swift 15.5.0-2.20210601012532.1fdb986.el8ost.2 RHEA-2021:3483 puppet-sysctl 0.0.12-2.20210528024924.a3d160d.el8ost.2 RHEA-2021:3483 puppet-systemd 2.10.0-2.20210528111029.03d94fa.el8ost.2 RHEA-2021:3483 puppet-timezone 5.1.1-2.20210528060202.21b4a58.el8ost.2 RHEA-2021:3483 puppet-tomcat 3.1.0-2.20210528051721.a3f92d1.el8ost.2 RHEA-2021:3483 puppet-tripleo 11.6.2-2.20210603175725.el8ost.2 RHEA-2021:3483 puppet-trove 15.4.1-2.20210601003850.0eacf4d.el8ost.2 RHEA-2021:3483 puppet-vcsrepo 3.0.0-2.20210528032828.b06d5d3.el8ost.2 RHEA-2021:3483 puppet-veritas_hyperscale 1.0.0-2.20210527173407.7c7868a.el8ost.2 RHEA-2021:3483 puppet-vswitch 11.5.0-2.20210601000818.5d96dab.el8ost.2 RHEA-2021:3483 puppet-xinetd 3.3.0-2.20210528030944.d768da2.el8ost.2 RHEA-2021:3483 puppet-zaqar 15.4.1-2.20210528060419.88b97ec.el8ost.2 RHEA-2021:3483 puppet-zookeeper 
0.9.0-2.20210528052531.5877cbf.el8ost.2 RHEA-2021:3483 python-oslo-concurrency-lang 3.30.1-2.20210528084908.f4d2dd8.el8ost.1 RHEA-2021:3483 python-oslo-i18n-lang 3.24.0-2.20210527231638.91b39bb.el8ost.1 RHEA-2021:3483 python-oslo-log-lang 3.44.3-2.20210528064856.e19c407.el8ost.1 RHEA-2021:3483 python-oslo-utils-lang 3.41.6-2.20210528071646.f4deaad.el8ost.1 RHEA-2021:3483 python3-anyjson 0.3.3-13.1.el8ost.1 RHEA-2021:3483 python3-appdirs 1.4.0-10.el8ost.1 RHEA-2021:3483 python3-boto 2.45.0-12.el8ost.1 RHEA-2021:3483 python3-cliff 2.16.0-2.20210527234856.6b6b186.el8ost.1 RHEA-2021:3483 python3-cmd2 0.6.8-15.el8ost.1 RHEA-2021:3483 python3-dateutil 2.8.0-8.el8ost.1 RHEA-2021:3483 python3-debtcollector 1.22.0-2.20210527225841.0be4911.el8ost.1 RHEA-2021:3483 python3-dogpile-cache 1.1.2-1.1.el8ost.1 RHEA-2021:3483 python3-eventlet 0.25.2-5.el8ost.1 RHEA-2021:3483 python3-fasteners 0.14.1-20.el8ost.1 RHEA-2021:3483 python3-funcsigs 1.0.2-8.el8ost.1 RHEA-2021:3483 python3-greenlet 0.4.14-10.el8ost.1 RHEA-2021:3483 python3-heat-agent 1.10.1-2.20210528020147.96b819c.el8ost.2 RHEA-2021:3483 python3-heat-agent-ansible 1.10.1-2.20210528020147.96b819c.el8ost.2 RHEA-2021:3483 python3-heat-agent-apply-config 1.10.1-2.20210528020147.96b819c.el8ost.2 RHEA-2021:3483 python3-heat-agent-docker-cmd 1.10.1-2.20210528020147.96b819c.el8ost.2 RHEA-2021:3483 python3-heat-agent-hiera 1.10.1-2.20210528020147.96b819c.el8ost.2 RHEA-2021:3483 python3-heat-agent-json-file 1.10.1-2.20210528020147.96b819c.el8ost.2 RHEA-2021:3483 python3-heat-agent-puppet 1.10.1-2.20210528020147.96b819c.el8ost.2 RHEA-2021:3483 python3-heatclient 1.18.1-2.20210528082653.ed9edc6.el8ost.1 RHEA-2021:3483 python3-iso8601 0.1.12-8.el8ost.1 RHEA-2021:3483 python3-keystoneauth1 3.17.4-2.20210609184811.8dc7366.el8ost.1 RHEA-2021:3483 python3-keystoneclient 3.21.0-2.20210527233755.79f150f.el8ost.1 RHEA-2021:3483 python3-markupsafe 1.1.0-7.el8ost.1 RHEA-2021:3483 python3-monotonic 1.5-8.el8ost.1 RHEA-2021:3483 python3-more-itertools 4.1.0-7.el8ost.1 RHEA-2021:3483 python3-munch 2.2.0-8.el8ost.1 RHEA-2021:3483 python3-netifaces 0.10.9-9.el8ost.1 RHEA-2021:3483 python3-numpy 1.17.0-7.el8ost.2 RHEA-2021:3483 python3-numpy-f2py 1.17.0-7.el8ost.2 RHEA-2021:3483 python3-openstacksdk 0.36.5-2.20210528093819.feda828.el8ost.1 RHEA-2021:3483 python3-os-client-config 1.33.0-2.20210527235743.d0eea17.el8ost.1 RHEA-2021:3483 python3-os-service-types 1.7.0-2.20210527190446.0b2f473.el8ost.1 RHEA-2021:3483 python3-osc-lib 1.14.1-2.20210527161058.a0d9746.el8ost.1 RHEA-2021:3483 python3-oslo-concurrency 3.30.1-2.20210528084908.f4d2dd8.el8ost.1 RHEA-2021:3483 python3-oslo-config 6.11.3-2.20210528084814.9b1ccea.el8ost.1 RHEA-2021:3483 python3-oslo-context 2.23.1-2.20210528064426.ab17aef.el8ost.1 RHEA-2021:3483 python3-oslo-i18n 3.24.0-2.20210527231638.91b39bb.el8ost.1 RHEA-2021:3483 python3-oslo-log 3.44.3-2.20210528064856.e19c407.el8ost.1 RHEA-2021:3483 python3-oslo-serialization 2.29.3-2.20210528100828.a9c4bfa.el8ost.1 RHEA-2021:3483 python3-oslo-utils 3.41.6-2.20210528071646.f4deaad.el8ost.1 RHEA-2021:3483 python3-paunch 5.5.1-2.20210527204730.9b6bef4.el8ost.1 RHEA-2021:3483 python3-pbr 5.4.3-7.el8ost.1 RHEA-2021:3483 python3-prometheus_client 0.6.0-2.el8ost RHEA-2021:3483 python3-protobuf 3.6.1-5.el8ost.1 RHEA-2021:3483 python3-psutil 5.6.3-3.el8ost RHEA-2021:3483 python3-pyasn1 0.4.6-3.el8ost.2 RHEA-2021:3483 python3-pyparsing 2.4.2-1.el8ost.1 RHEA-2021:3483 python3-pysnmp 4.4.8-7.el8ost.1 RHEA-2021:3483 python3-pystache 0.5.3-8.el8ost.1 RHEA-2021:3483 
python3-requestsexceptions 1.4.0-2.20210527160003.d7ac0ff.el8ost.1 RHEA-2021:3483 python3-rfc3986 1.2.0-11.el8ost.1 RHEA-2021:3483 python3-rsa 3.4.2-14.el8ost.1 RHEA-2021:3483 python3-simplejson 3.16.0-8.el8ost.1 RHEA-2021:3483 python3-six 1.12.0-2.el8ost RHEA-2021:3483 python3-stevedore 1.31.0-2.20210527225837.6817543.el8ost.1 RHEA-2021:3483 python3-swiftclient 3.8.1-2.20210527234845.72b90fe.el8ost.1 RHEA-2021:3483 python3-tenacity 5.1.1-8.el8ost.1 RHEA-2021:3483 python3-twisted 16.4.1-17.el8ost.1 RHEA-2021:3483 python3-wrapt 1.11.2-5.el8ost RHEA-2021:3483 python3-zaqarclient 1.12.0-2.20210528010027.9038bf6.el8ost.1 RHEA-2021:3483 python3-zope-event 4.2.0-14.2.el8ost RHEA-2021:3483 python3-zope-interface 4.4.3-3.el8ost RHEA-2021:3483 qpid-proton-c 0.32.0-2.el8 RHEA-2021:3483 rhosp-director-images-base 16.2-20210902.2.el8ost RHEA-2021:3485 rhosp-director-images-metadata 16.2-20210902.2.el8ost RHEA-2021:3485 rhosp-director-images-minimal 16.2-20210902.2.el8ost RHEA-2021:3485 rhosp-release 16.2.0-3.el8ost.1 RHEA-2021:3483 ruby-augeas 0.5.0-8.el8ost.1 RHEA-2021:3483 ruby-facter 3.9.3-14.el8ost.1 RHEA-2021:3483 ruby-shadow 2.5.0-7.el8ost.1 RHEA-2021:3483 rubygem-pathspec 0.2.1-10.el8ost RHEA-2021:3483 rubygem-rgen 0.6.6-7.1.el8ost.1 RHEA-2021:3483 yaml-cpp 0.6.1-13.el8ost.1 RHEA-2021:3483 | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/package_manifest/openstack-16.2-deployment-tools-for-rhel-8-ppc64le-rpms_2021-09-15 |
13.8. Date & Time | 13.8. Date & Time To configure time zone, date, and optionally settings for network time, select Date & Time at the Installation Summary screen. There are three ways for you to select a time zone: Using your mouse, click on the interactive map to select a specific city. A red pin appears indicating your selection. You can also scroll through the Region and City drop-down menus at the top of the screen to select your time zone. Select Etc at the bottom of the Region drop-down menu, then select your time zone in the menu adjusted to GMT/UTC, for example GMT+1 . If your city is not available on the map or in the drop-down menu, select the nearest major city in the same time zone. Alternatively you can use a Kickstart file, which will allow you to specify some additional time zones which are not available in the graphical interface. See the timezone command in timezone (required) for details. Note The list of available cities and regions comes from the Time Zone Database (tzdata) public domain, which is maintained by the Internet Assigned Numbers Authority (IANA). Red Hat cannot add cities or regions into this database. You can find more information at the official website, available at http://www.iana.org/time-zones . Specify a time zone even if you plan to use NTP (Network Time Protocol) to maintain the accuracy of the system clock. If you are connected to the network, the Network Time switch will be enabled. To set the date and time using NTP, leave the Network Time switch in the ON position and click the configuration icon to select which NTP servers Red Hat Enterprise Linux should use. To set the date and time manually, move the switch to the OFF position. The system clock should use your time zone selection to display the correct date and time at the bottom of the screen. If they are still incorrect, adjust them manually. Note that NTP servers might be unavailable at the time of installation. In such a case, enabling them will not set the time automatically. When the servers become available, the date and time will update. Once you have made your selection, click Done to return to the Installation Summary screen. Note To change your time zone configuration after you have completed the installation, visit the Date & Time section of the Settings dialog window. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-date-time-configuration-ppc |
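For illustration only, and not taken from the guide above: the Kickstart alternative mentioned in section 13.8 uses the timezone command, which could look like the following line. The city and NTP servers shown here are placeholder values, not recommendations from this guide.
timezone Europe/Prague --utc --ntpservers=0.rhel.pool.ntp.org,1.rhel.pool.ntp.org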
Chapter 133. Stub | Chapter 133. Stub Both producer and consumer are supported The Stub component provides a simple way to stub out any physical endpoints while in development or testing, allowing you for example to run a route without needing to actually connect to a specific SMTP or HTTP endpoint. Just add stub: in front of any endpoint URI to stub out the endpoint. Internally the Stub component creates VM endpoints. The main difference between Stub and VM is that VM will validate the URI and parameters you give it, so putting vm: in front of a typical URI with query arguments will usually fail. Stub won't though, as it basically ignores all query parameters to let you quickly stub out one or more endpoints in your route temporarily. 133.1. Dependencies When using stub with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-stub-starter</artifactId> </dependency> 133.2. URI format stub:someUri Where someUri can be any URI with any query parameters. 133.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 133.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 133.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allow you to externalize the configuration from your code, giving you more flexible and reusable code. 133.4. Component Options The Stub component supports 10 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean concurrentConsumers (consumer) Sets the default number of concurrent threads processing exchanges. 1 int defaultPollTimeout (consumer (advanced)) The timeout (in milliseconds) used when polling. When a timeout occurs, the consumer can check whether it is allowed to continue running. Setting a lower value allows the consumer to react more quickly upon shutdown.
1000 int defaultBlockWhenFull (producer) Whether a thread that sends messages to a full SEDA queue will block until the queue's capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will instead block and wait until the message can be accepted. false boolean defaultDiscardWhenFull (producer) Whether a thread that sends messages to a full SEDA queue will be discarded. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will give up sending and continue, meaning that the message was not sent to the SEDA queue. false boolean defaultOfferTimeout (producer) Whether a thread that sends messages to a full SEDA queue will block until the queue's capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, where a configured timeout can be added to the block case. Utilizing the .offer(timeout) method of the underlining java queue. long lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean defaultQueueFactory (advanced) Sets the default queue factory. BlockingQueueFactory queueSize (advanced) Sets the default maximum capacity of the SEDA queue (i.e., the number of messages it can hold). 1000 int 133.5. Endpoint Options The Stub endpoint is configured using URI syntax: with the following path and query parameters: 133.5.1. Path Parameters (1 parameters) Name Description Default Type name (common) Required Name of queue. String 133.5.2. Query Parameters (18 parameters) Name Description Default Type size (common) The maximum capacity of the SEDA queue (i.e., the number of messages it can hold). Will by default use the defaultSize set on the SEDA component. 1000 int bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean concurrentConsumers (consumer) Number of concurrent threads processing exchanges. 1 int exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. 
By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern limitConcurrentConsumers (consumer (advanced)) Whether to limit the number of concurrentConsumers to the maximum of 500. By default, an exception will be thrown if an endpoint is configured with a greater number. You can disable that check by turning this option off. true boolean multipleConsumers (consumer (advanced)) Specifies whether multiple consumers are allowed. If enabled, you can use SEDA for Publish-Subscribe messaging. That is, you can send a message to the SEDA queue and have each consumer receive a copy of the message. When enabled, this option should be specified on every consumer endpoint. false boolean pollTimeout (consumer (advanced)) The timeout (in milliseconds) used when polling. When a timeout occurs, the consumer can check whether it is allowed to continue running. Setting a lower value allows the consumer to react more quickly upon shutdown. 1000 int purgeWhenStopping (consumer (advanced)) Whether to purge the task queue when stopping the consumer/route. This allows to stop faster, as any pending messages on the queue is discarded. false boolean blockWhenFull (producer) Whether a thread that sends messages to a full SEDA queue will block until the queue's capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will instead block and wait until the message can be accepted. false boolean discardIfNoConsumers (producer) Whether the producer should discard the message (do not add the message to the queue), when sending to a queue with no active consumers. Only one of the options discardIfNoConsumers and failIfNoConsumers can be enabled at the same time. false boolean discardWhenFull (producer) Whether a thread that sends messages to a full SEDA queue will be discarded. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will give up sending and continue, meaning that the message was not sent to the SEDA queue. false boolean failIfNoConsumers (producer) Whether the producer should fail by throwing an exception, when sending to a queue with no active consumers. Only one of the options discardIfNoConsumers and failIfNoConsumers can be enabled at the same time. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean offerTimeout (producer) Offer timeout (in milliseconds) can be added to the block case when queue is full. You can disable timeout by using 0 or a negative value. long timeout (producer) Timeout (in milliseconds) before a SEDA producer will stop waiting for an asynchronous task to complete. You can disable timeout by using 0 or a negative value. 
30000 long waitForTaskToComplete (producer) Option to specify whether the caller should wait for the async task to complete or not before continuing. The following three options are supported: Always, Never or IfReplyExpected. The first two values are self-explanatory. The last value, IfReplyExpected, will only wait if the message is Request Reply based. The default option is IfReplyExpected. Enum values: Never IfReplyExpected Always IfReplyExpected WaitForTaskToComplete queue (advanced) Define the queue instance which will be used by the endpoint. BlockingQueue 133.6. Examples Here are a few samples of stubbing endpoint uris 133.7. Spring Boot Auto-Configuration The component supports 11 options, which are listed below. Name Description Default Type camel.component.stub.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.stub.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.stub.concurrent-consumers Sets the default number of concurrent threads processing exchanges. 1 Integer camel.component.stub.default-block-when-full Whether a thread that sends messages to a full SEDA queue will block until the queue's capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will instead block and wait until the message can be accepted. false Boolean camel.component.stub.default-discard-when-full Whether a thread that sends messages to a full SEDA queue will be discarded. By default, an exception will be thrown stating that the queue is full. By enabling this option, the calling thread will give up sending and continue, meaning that the message was not sent to the SEDA queue. false Boolean camel.component.stub.default-offer-timeout Whether a thread that sends messages to a full SEDA queue will block until the queue's capacity is no longer exhausted. By default, an exception will be thrown stating that the queue is full. By enabling this option, where a configured timeout can be added to the block case. Utilizing the .offer(timeout) method of the underlining java queue. Long camel.component.stub.default-poll-timeout The timeout (in milliseconds) used when polling. When a timeout occurs, the consumer can check whether it is allowed to continue running. Setting a lower value allows the consumer to react more quickly upon shutdown. 1000 Integer camel.component.stub.default-queue-factory Sets the default queue factory. The option is a org.apache.camel.component.seda.BlockingQueueFactory<org.apache.camel.Exchange> type. BlockingQueueFactory camel.component.stub.enabled Whether to enable auto configuration of the stub component. This is enabled by default. Boolean camel.component.stub.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.stub.queue-size Sets the default maximum capacity of the SEDA queue (i.e., the number of messages it can hold). 1000 Integer | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-stub-starter</artifactId> </dependency>",
"stub:someUri",
"stub:name",
"stub:smtp://somehost.foo.com?user=whatnot&something=else stub:http://somehost.bar.com/something"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-stub-component-starter |
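The Spring Boot options listed in section 133.7 map one-to-one onto application.properties keys, so the stub component can be tuned without touching route code. The following is a minimal sketch rather than an excerpt from the guide: it assumes a Camel on Spring Boot project that already declares the camel-stub-starter dependency shown above, and the values are purely illustrative.
# Illustrative only: append stub component settings to the Spring Boot configuration
cat >> src/main/resources/application.properties <<'EOF'
# raise the default SEDA queue capacity from 1000 to 5000 messages
camel.component.stub.queue-size=5000
# block the calling thread instead of throwing when a queue is full
camel.component.stub.default-block-when-full=true
# process exchanges with two concurrent consumer threads by default
camel.component.stub.concurrent-consumers=2
EOF
The same behaviour can also be set per endpoint with URI options, for example stub:someUri?size=5000&blockWhenFull=true, which overrides the component defaults for that one queue.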
7.5. Configuring System Memory Capacity | 7.5. Configuring System Memory Capacity This section discusses memory-related kernel parameters that may be useful in improving memory utilization on your system. These parameters can be temporarily set for testing purposes by altering the value of the corresponding file in the /proc file system. Once you have determined the values that produce optimal performance for your use case, you can set them permanently by using the sysctl command. Memory usage is typically configured by setting the value of one or more kernel parameters. These parameters can be set temporarily by altering the contents of files in the /proc file system, or they can be set persistently with the sysctl tool, which is provided by the procps-ng package. For example, to set the overcommit_memory parameter to 1 temporarily, run the following command: To set this value persistently, add sysctl vm.overcommit_memory=1 in /etc/sysctl.conf then run the following command: Setting a parameter temporarily is useful for determining the effect the parameter has on your system. You can then set the parameter persistently when you are sure that the parameter's value has the desired effect. Note To expand your expertise, you might also be interested in the Red Hat Enterprise Linux Performance Tuning (RH442) training course. 7.5.1. Virtual Memory Parameters The parameters listed in this section are located in /proc/sys/vm unless otherwise indicated. dirty_ratio A percentage value. When this percentage of total system memory is modified, the system begins writing the modifications to disk with the pdflush operation. The default value is 20 percent. dirty_background_ratio A percentage value. When this percentage of total system memory is modified, the system begins writing the modifications to disk in the background. The default value is 10 percent. overcommit_memory Defines the conditions that determine whether a large memory request is accepted or denied. The default value is 0 . By default, the kernel performs heuristic memory overcommit handling by estimating the amount of memory available and failing requests that are too large. However, since memory is allocated using a heuristic rather than a precise algorithm, overloading memory is possible with this setting. When this parameter is set to 1 , the kernel performs no memory overcommit handling. This increases the possibility of memory overload, but improves performance for memory-intensive tasks. When this parameter is set to 2 , the kernel denies requests for memory equal to or larger than the sum of total available swap space and the percentage of physical RAM specified in overcommit_ratio . This reduces the risk of overcommitting memory, but is recommended only for systems with swap areas larger than their physical memory. overcommit_ratio Specifies the percentage of physical RAM considered when overcommit_memory is set to 2 . The default value is 50 . max_map_count Defines the maximum number of memory map areas that a process can use. The default value ( 65530 ) is appropriate for most cases. Increase this value if your application needs to map more than this number of files. min_free_kbytes Specifies the minimum number of kilobytes to keep free across the system. This is used to determine an appropriate value for each low memory zone, each of which is assigned a number of reserved free pages in proportion to their size. Warning Extreme values can damage your system. 
Setting min_free_kbytes to an extremely low value prevents the system from reclaiming memory, which can result in system hangs and OOM-killing processes. However, setting min_free_kbytes too high (for example, to 5-10% of total system memory) causes the system to enter an out-of-memory state immediately, resulting in the system spending too much time reclaiming memory. oom_adj In the event that the system runs out of memory and the panic_on_oom parameter is set to 0 , the oom_killer function kills processes until the system can recover, starting from the process with the highest oom_score . The oom_adj parameter helps determine the oom_score of a process. This parameter is set per process identifier. A value of -17 disables the oom_killer for that process. Other valid values are from -16 to 15 . Note Processes spawned by an adjusted process inherit the oom_score of the process. swappiness The swappiness value, ranging from 0 to 100 , controls the degree to which the system favors anonymous memory or the page cache. A high value improves file-system performance while aggressively swapping less active processes out of RAM. A low value avoids swapping processes out of memory, which usually decreases latency at the cost of I/O performance. The default value is 60 . Warning Setting swappiness to 0 causes the kernel to avoid swapping out very aggressively, which increases the risk of OOM killing under strong memory and I/O pressure. 7.5.2. File System Parameters Parameters listed in this section are located in /proc/sys/fs unless otherwise indicated. aio-max-nr Defines the maximum allowed number of events in all active asynchronous input/output contexts. The default value is 65536 . Modifying this value does not pre-allocate or resize any kernel data structures. file-max Determines the maximum number of file handles for the entire system. The default value on Red Hat Enterprise Linux 7 is the maximum of either 8192 , or one tenth of the free memory pages available at the time the kernel starts. Raising this value can resolve errors caused by a lack of available file handles. 7.5.3. Kernel Parameters Default values for the following parameters, located in the /proc/sys/kernel/ directory, can be calculated by the kernel at boot time depending on available system resources. msgmax Defines the maximum allowable size in bytes of any single message in a message queue. This value must not exceed the size of the queue ( msgmnb ). To determine the current msgmax value on your system, use: msgmnb Defines the maximum size in bytes of a single message queue. To determine the current msgmnb value on your system, use: msgmni Defines the maximum number of message queue identifiers, and therefore the maximum number of queues. To determine the current msgmni value on your system, use: shmall Defines the total amount of shared memory pages that can be used on the system at one time. A page is 4096 bytes on the AMD64 and Intel 64 architecture, for example. To determine the current shmall value on your system, use: shmmax Defines the maximum size (in bytes) of a single shared memory segment allowed by the kernel. To determine the current shmmax value on your system, use: shmmni Defines the system-wide maximum number of shared memory segments. The default value is 4096 on all systems. threads-max Defines the system-wide maximum number of threads available to the kernel at one time. To determine the current threads-max value on your system, use: The default value is the result of: The minimum value is 20. | [
"echo 1 > /proc/sys/vm/overcommit_memory",
"sysctl -p",
"sysctl kernel.msgmax",
"sysctl kernel.msgmnb",
"sysctl kernel.msgmni",
"sysctl kernel.shmall",
"sysctl kernel.shmmax",
"sysctl kernel.threads-max",
"mempages / (8 * THREAD_SIZE / PAGE SIZE )"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Configuration_tools-Configuring_system_memory_capacity |
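The workflow described at the start of this section applies to every parameter above: test a value through /proc or sysctl, then persist it once it proves useful. A short sketch follows; it assumes root privileges, and the values are examples for experimentation rather than tuning recommendations.
# set a value temporarily by writing to /proc, as shown above for overcommit_memory
echo 1 > /proc/sys/vm/overcommit_memory
# or equivalently with sysctl -w
sysctl -w vm.swappiness=10
# read back any parameter before and after the change
sysctl vm.swappiness
cat /proc/sys/kernel/shmmax
# persist the settings once validated: /etc/sysctl.conf holds plain "key = value" lines
cat >> /etc/sysctl.conf <<'EOF'
vm.overcommit_memory = 1
vm.swappiness = 10
EOF
sysctl -p    # reload /etc/sysctl.conf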
Chapter 15. Virtual File Systems and Disk Management | Chapter 15. Virtual File Systems and Disk Management 15.1. GVFS GVFS ( GNOME Virtual File System ) is an extension of the virtual file system interface provided by the libraries on which the GNOME Desktop is built. GVFS provides the complete virtual file system infrastructure and handles storage in the GNOME Desktop. GVFS uses addresses based on the URI (Uniform Resource Identifier) standard for full identification; these are syntactically similar to the URL addresses used in web browsers. Addresses in the form schema://user@server/path are the key information that determines the kind of service. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/desktop_migration_and_administration_guide/virtual-file-systems-disk-management
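As a brief illustration of the URI scheme described above, the gvfs command-line utilities shipped with RHEL 7 accept the same schema://user@server/path addresses as the graphical file manager. The host, user, and share names below are placeholders, and the exact utilities available depend on the installed gvfs subpackages.
# mount a remote SMB share by its GVFS URI
gvfs-mount smb://alice@fileserver.example.com/projects
# browse and copy through the same virtual file system
gvfs-ls smb://alice@fileserver.example.com/projects
gvfs-copy smb://alice@fileserver.example.com/projects/report.odt ~/Desktop/
# unmount when done
gvfs-mount -u smb://alice@fileserver.example.com/projects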
Chapter 14. Verifying image signatures | Chapter 14. Verifying image signatures You can use Red Hat Advanced Cluster Security for Kubernetes (RHACS) to ensure the integrity of the container images in your clusters by verifying image signatures against pre-configured keys. You can create policies to block unsigned images and images that do not have a verified signature. You can also enforce the policy by using the RHACS admission controller to stop unauthorized deployment creation. Note RHACS only supports Cosign signatures and Cosign Public Keys/Certificates verification. For more information about Cosign, see Cosign overview . For Cosign signature verification, RHACS does not support communication with the transparency log Rekor . You must configure signature integration with at least 1 Cosign verification method for signature verification. For all deployed and watched images: RHACS fetches and verifies the signatures every 4 hours. RHACS verifies the signatures whenever you change or update your signature integration verification data. 14.1. Configuring signature integration Before performing image signature verification, you must first create a signature integration in RHACS. A signature integration can be configured with multiple verification methods. The following verification methods are supported: Cosign public keys Cosign certificates 14.1.1. Configuring Cosign public keys Prerequisites You must already have a PEM-encoded Cosign public key. For more information about Cosign, see Cosign overview . Procedure In the RHACS portal, select Platform Configuration Integrations . Scroll to Signature Integrations and click Signature . Click New integration . Enter a name for the Integration name . Click Cosign public Keys Add a new public key . Enter the Public key name. For the Public key value field, enter the PEM-encoded public key. (Optional) You can add more than one key by clicking Add a new public key and entering the details. Click Save . 14.1.2. Configuring Cosign certificates Prerequisites You must already have the certificate identity and issuer. Optionally, you also need a PEM-encoded certificate and chain. For more information about Cosign certificates, see Cosign certificate verification Procedure In the RHACS portal, select Platform Configuration Integrations . Scroll to Signature Integrations and click Signature . Click New integration . Enter a name for the Integration name . Click Cosign certificates Add a new certificate verification . Enter the Certificate OIDC Issuer . You can optionally use regular expressions in RE2 Syntax . Enter the Certificate identity . You can optionally use regular expressions in RE2 Syntax . (Optional) Enter the Certificate Chain PEM encoded to verify certificates. If no chain is provided, certificates are verified against the Fulcio root. (Optional) Enter the Certificate PEM encoded to verify the signature. (Optional) You can add more than one certificate verification by clicking Add a new certificate verification and entering the details. Click Save . 14.2. Using signature verification in a policy When creating custom security policies, you can use the Trusted image signers policy criteria to verify image signatures. Prerequisites You must have already configured a signature integration with at least 1 Cosign public key. Procedure When creating or editing a policy, drag the Not verified by trusted image signers policy criteria in the policy field drop area for the Policy criteria section. Click Select . 
Select the trusted image signers from the list and click Save . Additional resources Creating a security policy from the system policies view Policy criteria 14.3. Enforcing signature verification To prevent users from deploying unsigned images, you can enforce signature verification by using the RHACS admission controller. You must first enable the Contact Image Scanners feature in your cluster configuration settings. Then, while creating a security policy to enforce signature verification, use the Inform and enforce option. For more information, see Enabling admission controller enforcement . Additional resources Creating a security policy from the system policies view | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/operating/verify-image-signatures
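For context on where the verification material above comes from, the following sketch shows how a Cosign key pair and signature are typically produced with the upstream cosign CLI. The image reference is a placeholder, and your signing workflow (for example, keyless signing or an external KMS) may differ; only the resulting PEM-encoded public key or certificate is what the RHACS signature integration needs.
# generate a key pair; cosign.pub is the PEM-encoded public key used in the RHACS integration
cosign generate-key-pair
# sign an image in your registry with the private key
cosign sign --key cosign.key quay.io/example/my-app:1.0
# optionally verify locally before relying on RHACS policy enforcement
cosign verify --key cosign.pub quay.io/example/my-app:1.0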
19.4. Kerberos and PAM | 19.4. Kerberos and PAM Currently, kerberized services do not make use of Pluggable Authentication Modules (PAM) - kerberized servers bypass PAM completely. However, applications that use PAM can make use of Kerberos for authentication if the pam_krb5 module (provided in the pam_krb5 package) is installed. The pam_krb5 package contains sample configuration files that allow services like login and gdm to authenticate users as well as obtain initial credentials using their passwords. If access to network servers is always performed using kerberized services or services that use GSS-API, such as IMAP, the network can be considered reasonably safe. Note Administrators should be careful to not allow users to authenticate to most network services using Kerberos passwords. Many protocols used by these services do not encrypt the password before sending it over the network, destroying the benefits of the Kerberos system. For example, users should not be allowed to authenticate using their Kerberos passwords over Telnet. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-kerberos-pam |
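A quick way to see what the pam_krb5 package provides is sketched below; the module invocation shown in the comment is illustrative only, and any real change to the PAM stack should start from the sample configuration files mentioned above rather than hand-edited entries.
# confirm the module is installed and locate its sample configuration files
rpm -q pam_krb5
ls /usr/share/doc/pam_krb5-*/
# check whether a PAM-aware service already calls the module; a typical
# (illustrative) entry in /etc/pam.d/system-auth looks like:
#   auth  sufficient  pam_krb5.so use_first_pass
grep krb5 /etc/pam.d/system-auth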
3.2. Setting up Certificate Profiles | 3.2. Setting up Certificate Profiles In Certificate System, you can add, delete, and modify enrollment profiles: Using the PKI command-line interface Using the Java-based administration console This section provides information on each method. 3.2.1. Managing Certificate Enrollment Profiles Using the PKI Command-line Interface This section describes how to manage certificate profiles using the pki utility. For further details, see the pki-ca-profile (1) man page. Note Using the raw format is recommended. For details on each attribute and field of the profile, see the section Creating and Editing Certificate Profiles Directly on the File System in Red Hat Certificate System Planning, Installation and Deployment Guide. 3.2.1.1. Enabling and Disabling a Certificate Profile Before you can edit a certificate profile, you must disable it. After the modification is complete, you can re-enable the profile. Note Only CA agents can enable and disable certificate profiles. For example, to disable the caCMCECserverCert certificate profile: For example, to enable the caCMCECserverCert certificate profile: 3.2.1.2. Creating a Certificate Profile in Raw Format To create a new profile in raw format: Note In raw format, specify the new profile ID as follows: 3.2.1.3. Editing a Certificate Profile in Raw Format CA administrators can edit a certificate profile in raw format without manually downloading the configuration file. For example, to edit the caCMCECserverCert profile: This command automatically downloads the profile configuration in raw format and opens it in the VI editor. When you close the editor, the profile configuration is updated on the server. You do not need to restart the CA after editing a profile. Important Before you can edit a profile, disable the profile. For details, see Section 3.2.1.1, "Enabling and Disabling a Certificate Profile" . Example 3.2. Editing a Certificate Profile in RAW Format For example, to edit the caCMCserverCert profile to accept multiple user-supplied extensions: Disable the profile as a CA agent: Edit the profile as a CA administrator: Download and open the profile in the VI editor: Update the configuration to accept the extensions. For details, see Example B.3, "Multiple User Supplied Extensions in CSR" . Enable the profile as a CA agent: 3.2.1.4. Deleting a Certificate Profile To delete a certificate profile: Important Before you can delete a profile, disable the profile. For details, see Section 3.2.1.1, "Enabling and Disabling a Certificate Profile" . 3.2.2. Managing Certificate Enrollment Profiles Using the Java-based Administration Console Important pkiconsole is being deprecated. 3.2.2.1. Creating Certificate Profiles through the CA Console For security reasons, the Certificate Systems enforces separation of roles whereby an existing certificate profile can only be edited by an administrator after it was allowed by an agent. To add a new certificate profile or modify an existing certificate profile, perform the following steps as the administrator: Log in to the Certificate System CA subsystem console. In the Configuration tab, select Certificate Manager , and then select Certificate Profiles . The Certificate Profile Instances Management tab, which lists configured certificate profiles, opens. To create a new certificate profile, click Add . In the Select Certificate Profile Plugin Implementation window, select the type of certificate for which the profile is being created. 
Fill in the profile information in the Certificate Profile Instance Editor . Certificate Profile Instance ID . This is the ID used by the system to identify the profile. Certificate Profile Name . This is the user-friendly name for the profile. Certificate Profile Description . End User Certificate Profile . This sets whether the request must be made through the input form for the profile. This is usually set to true . Setting this to false allows a signed request to be processed through the Certificate Manager's certificate profile framework, rather than through the input page for the certificate profile. Certificate Profile Authentication . This sets the authentication method. An automated authentication is set by providing the instance ID for the authentication instance. If this field is blank, the authentication method is agent-approved enrollment; the request is submitted to the request queue of the agent services interface. Unless it is for a TMS subsystem, administrators must select one of the following authentication plug-ins: CMCAuth : Use this plug-in when a CA agent must approve and submit the enrollment request. CMCUserSignedAuth : Use this plug-in to enable non-agent users to enroll own certificates. Click OK . The plug-in editor closes, and the new profile is listed in the profiles tab. Configure the policies, inputs, and outputs for the new profile. Select the new profile from the list, and click Edit/View . Set up policies in the Policies tab of the Certificate Profile Rule Editor window. The Policies tab lists policies that are already set by default for the profile type. To add a policy, click Add . Choose the default from the Default field, choose the constraints associated with that policy in the Constraints field, and click OK . Fill in the policy set ID. When issuing dual key pairs, separate policy sets define the policies associated with each certificate. Then fill in the certificate profile policy ID, a name or identifier for the certificate profile policy. Configure any parameters in the Defaults and Constraints tabs. Defaults defines attributes that populate the certificate request, which in turn determines the content of the certificate. These can be extensions, validity periods, or other fields contained in the certificates. Constraints defines valid values for the defaults. See Section B.1, "Defaults Reference" and Section B.2, "Constraints Reference" for complete details for each default or constraint. To modify an existing policy, select a policy, and click Edit . Then edit the default and constraints for that policy. To delete a policy, select the policy, and click Delete . Set inputs in the Inputs tab of the Certificate Profile Rule Editor window. There can be more than one input type for a profile. Note Unless you configure the profile for a TMS subsystem, select only cmcCertReqInput and delete other profiles by selecting them and clicking the Delete button. To add an input, click Add . Choose the input from the list, and click OK . See Section A.1, "Input Reference" for complete details of the default inputs. The New Certificate Profile Editor window opens. Set the input ID, and click OK . Inputs can be added and deleted. It is possible to select edit for an input, but since inputs have no parameters or other settings, there is nothing to configure. To delete an input, select the input, and click Delete . Set up outputs in the Outputs tab of the Certificate Profile Rule Editor window. 
Outputs must be set for any certificate profile that uses an automated authentication method; no output needs to be set for any certificate profile that uses agent-approved authentication. The Certificate Output type is set by default for all profiles and is added automatically to custom profiles. Unless you configure the profile for a TMS subsystem, select only certOutput . Outputs can be added and deleted. It is possible to select edit for an output, but since outputs have no parameters or other settings, there is nothing to configure. To add an output, click Add . Choose the output from the list, and click OK . Give a name or identifier for the output, and click OK . This output will be listed in the output tab. You can edit it to provide values to the parameters in this output. To delete an output, select the output from the list, and click Delete . Restart the CA to apply the new profile. After creating the profile as an administrator, a CA agent has to approve the profile in the agent services pages to enable the profile. Open the CA's services page. Click the Manage Certificate Profiles link. This page lists all of the certificate profiles that have been set up by an administrator, both active and inactive. Click the name of the certificate profile to approve. At the bottom of the page, click the Enable button. Note If this profile will be used with a TPS, then the TPS must be configured to recognize the profile type. This is described in 11.1.4. Managing Smart Card CA Profiles in Red Hat Certificate System's Planning, Installation, and Deployment Guide. Authorization methods for the profiles can only be added to the profile using the command line, as described in the section Creating and Editing Certificate Profiles Directly on the File System in Red Hat Certificate System Planning, Installation and Deployment Guide. 3.2.2.2. Editing Certificate Profiles in the Console To modify an existing certificate profile: Log into the agent services pages and disable the profile. Once a certificate profile is enabled by an agent, that certificate profile is marked enabled in the Certificate Profile Instance Management tab, and the certificate profile cannot be edited in any way through the console. Log in to the Certificate System CA subsystem console. In the Configuration tab, select Certificate Manager , and then select Certificate Profiles . Select the certificate profile, and click Edit/View . The Certificate Profile Rule Editor window appears. Make any changes to the defaults, constraints, inputs, or outputs. Note The profile instance ID cannot be modified. If necessary, enlarge the window by pulling out one of the corners of the window. Restart the CA to apply the changes. In the agent services page, re-enable the profile. Note Delete any certificate profiles that will not be approved by an agent. Any certificate profile that appears in the Certificate Profile Instance Management tab also appears in the agent services interface. If a profile has already been enabled, it must be disabled by the agent before it can be deleted from the profile list. 3.2.3. Listing Certificate Enrollment Profiles The following pre-defined certificate profiles are ready to use and set up in this environment when the Certificate System CA is installed. These certificate profiles have been designed for the most common types of certificates, and they provide common defaults, constraints, authentication methods, inputs, and outputs. To list the available profiles on the command line, use the pki utility.
For example: For further details, see the pki-ca-profile (1) man page. Additional information can also be found at Red Hat Certificate System Planning, Installation, and Deployment Guide . 3.2.4. Displaying Details of a Certificate Enrollment Profile For example, to display a specific certificate profile, such as caECFullCMCUserSignedCert : For example, to display a specific certificate profile, such as caECFullCMCUserSignedCert , in raw format: For further details, see the pki-ca-profile (1) man page. | [
"pki -c password -n caagent ca-profile-disable caCMCECserverCert",
"pki -c password -n caagent ca-profile-enable caCMCECserverCert",
"pki -c password -n caadmin ca-profile-add profile_name .cfg --raw",
"profileId= profile_name",
"pki -c password -n caadmin ca-profile-edit caCMCECserverCert",
"pki -c password -n caagemt ca-profile-disable caCMCserverCert",
"pki -c password -n caadmin ca-profile-edit caCMCserverCert",
"pki -c password -n caagent ca-profile-enable caCMCserverCert",
"pki -c password -n caadmin ca-profile-del profile_name",
"pkiconsole https://server.example.com:8443/ca",
"systemctl restart pki-tomcatd-nuxwdog@ instance_name .service",
"https://server.example.com:8443/ca/services",
"pkiconsole https://server.example.com:8443/ca",
"pki -c password -n caadmin ca-profile-find ------------------ 59 entries matched ------------------ Profile ID: caCMCserverCert Name: Server Certificate Enrollment using CMC Description: This certificate profile is for enrolling server certificates using CMC. Profile ID: caCMCECserverCert Name: Server Certificate wth ECC keys Enrollment using CMC Description: This certificate profile is for enrolling server certificates with ECC keys using CMC. Profile ID: caCMCECsubsystemCert Name: Subsystem Certificate Enrollment with ECC keys using CMC Description: This certificate profile is for enrolling subsystem certificates with ECC keys using CMC. Profile ID: caCMCsubsystemCert Name: Subsystem Certificate Enrollment using CMC Description: This certificate profile is for enrolling subsystem certificates using CMC. ----------------------------- Number of entries returned 20",
"pki -c password -n caadmin ca-profile-show caECFullCMCUserSignedCert ----------------------------------- Profile \"caECFullCMCUserSignedCert\" ----------------------------------- Profile ID: caECFullCMCUserSignedCert Name: User-Signed CMC-Authenticated User Certificate Enrollment Description: This certificate profile is for enrolling user certificates with EC keys by using the CMC certificate request with non-agent user CMC authentication. Name: Certificate Request Input Class: cmcCertReqInputImpl Attribute Name: cert_request Attribute Description: Certificate Request Attribute Syntax: cert_request Name: Certificate Output Class: certOutputImpl Attribute Name: pretty_cert Attribute Description: Certificate Pretty Print Attribute Syntax: pretty_print Attribute Name: b64_cert Attribute Description: Certificate Base-64 Encoded Attribute Syntax: pretty_print",
"pki -c password -n caadmin ca-profile-show caECFullCMCUserSignedCert --raw #Wed Jul 25 14:41:35 PDT 2018 auth.instance_id=CMCUserSignedAuth policyset.cmcUserCertSet.1.default.params.name= policyset.cmcUserCertSet.4.default.class_id=authorityKeyIdentifierExtDefaultImpl policyset.cmcUserCertSet.6.default.params.keyUsageKeyCertSign=false policyset.cmcUserCertSet.10.default.class_id=noDefaultImpl policyset.cmcUserCertSet.10.constraint.name=Renewal Grace Period Constraint output.o1.class_id=certOutputImpl"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/setting_up_certificate_profiles |
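The disable-edit-enable cycle from section 3.2.1 can be run end to end with the commands already shown above; the sequence below simply strings them together for the caCMCserverCert profile. Substitute your own client password, certificate nicknames, and profile ID.
# a CA agent takes the profile out of service before any change
pki -c password -n caagent ca-profile-disable caCMCserverCert
# a CA administrator edits the raw profile configuration in the default editor
pki -c password -n caadmin ca-profile-edit caCMCserverCert
# the CA agent re-enables the profile once the edit is saved
pki -c password -n caagent ca-profile-enable caCMCserverCert
# review the result in raw format
pki -c password -n caadmin ca-profile-show caCMCserverCert --raw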
Service Mesh | Service Mesh Red Hat OpenShift Service on AWS 4 Service Mesh installation, usage, and release notes Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/service_mesh/index |
Chapter 22. Monitoring performance by using the metrics RHEL System Role | Chapter 22. Monitoring performance by using the metrics RHEL System Role As a system administrator, you can use the metrics RHEL System Role with any Ansible Automation Platform control node to monitor the performance of a system. 22.1. Introduction to the metrics System Role RHEL System Roles is a collection of Ansible roles and modules that provide a consistent configuration interface to remotely manage multiple RHEL systems. The metrics System Role configures performance analysis services for the local system and, optionally, includes a list of remote systems to be monitored by the local system. The metrics System Role enables you to use pcp to monitor your systems performance without having to configure pcp separately, as the set-up and deployment of pcp is handled by the playbook. Table 22.1. metrics system role variables Role variable Description Example usage metrics_monitored_hosts List of remote hosts to be analyzed by the target host. These hosts will have metrics recorded on the target host, so ensure enough disk space exists below /var/log for each host. metrics_monitored_hosts: [" webserver.example.com ", " database.example.com "] metrics_retention_days Configures the number of days for performance data retention before deletion. metrics_retention_days: 14 metrics_graph_service A boolean flag that enables the host to be set up with services for performance data visualization via pcp and grafana . Set to false by default. metrics_graph_service: no metrics_query_service A boolean flag that enables the host to be set up with time series query services for querying recorded pcp metrics via redis . Set to false by default. metrics_query_service: no metrics_provider Specifies which metrics collector to use to provide metrics. Currently, pcp is the only supported metrics provider. metrics_provider: "pcp" metrics_manage_firewall Uses the firewall role to manage port access directly from the metrics role. Set to false by default. metrics_manage_firewall: true metrics_manage_selinux Uses the selinux role to manage port access directly from the metrics role. Set to false by default. metrics_manage_selinux: true Note For details about the parameters used in metrics_connections and additional information about the metrics System Role, see the /usr/share/ansible/roles/rhel-system-roles.metrics/README.md file. 22.2. Using the metrics System Role to monitor your local system with visualization This procedure describes how to use the metrics RHEL System Role to monitor your local system while simultaneously provisioning data visualization via Grafana . Prerequisites The Ansible Core package is installed on the control machine. You have the rhel-system-roles package installed on the machine you want to monitor. Procedure Configure localhost in the /etc/ansible/hosts Ansible inventory by adding the following content to the inventory: Create an Ansible playbook with the following content: Run the Ansible playbook: Note Since the metrics_graph_service boolean is set to value="yes", Grafana is automatically installed and provisioned with pcp added as a data source. Since metrics_manage_firewall and metrics_manage_selinux are both set to true, the metrics role will use the firewall and selinux system roles to manage the ports used by the metrics role. To view visualization of the metrics being collected on your machine, access the grafana web interface as described in Accessing the Grafana web UI . 22.3. 
Using the metrics System Role to set up a fleet of individual systems to monitor themselves This procedure describes how to use the metrics System Role to set up a fleet of machines to monitor themselves. Prerequisites The Ansible Core package is installed on the control machine. You have the rhel-system-roles package installed on the machine you want to use to run the playbook. You have an SSH connection established. Procedure Add the name or IP of the machines you want to monitor via the playbook to the /etc/ansible/hosts Ansible inventory file under an identifying group name enclosed in brackets: Create an Ansible playbook with the following content: Note Since metrics_manage_firewall and metrics_manage_selinux are both set to true, the metrics role will use the firewall and selinux roles to manage the ports used by the metrics role. Run the Ansible playbook: The -k option prompts for the password used to connect to the remote systems. 22.4. Using the metrics System Role to monitor a fleet of machines centrally via your local machine This procedure describes how to use the metrics System Role to set up your local machine to centrally monitor a fleet of machines while also provisioning visualization of the data via grafana and querying of the data via redis . Prerequisites The Ansible Core package is installed on the control machine. You have the rhel-system-roles package installed on the machine you want to use to run the playbook. Procedure Create an Ansible playbook with the following content: Run the Ansible playbook: Note Since the metrics_graph_service and metrics_query_service booleans are set to value="yes", grafana is automatically installed and provisioned with pcp added as a data source, and the pcp data recording is indexed into redis , allowing the pcp querying language to be used for complex querying of the data. Since metrics_manage_firewall and metrics_manage_selinux are both set to true, the metrics role will use the firewall and selinux roles to manage the ports used by the metrics role. To view a graphical representation of the metrics being collected centrally by your machine and to query the data, access the grafana web interface as described in Accessing the Grafana web UI . 22.5. Setting up authentication while monitoring a system using the metrics System Role PCP supports the scram-sha-256 authentication mechanism through the Simple Authentication Security Layer (SASL) framework. The metrics RHEL System Role automates the steps to set up authentication using the scram-sha-256 authentication mechanism. This procedure describes how to set up authentication using the metrics RHEL System Role. Prerequisites The Ansible Core package is installed on the control machine. You have the rhel-system-roles package installed on the machine you want to use to run the playbook. Procedure Include the following variables in the Ansible playbook you want to set up authentication for: Note Since metrics_manage_firewall and metrics_manage_selinux are both set to true, the metrics role will use the firewall and selinux roles to manage the ports used by the metrics role. Run the Ansible playbook: Verification steps Verify the sasl configuration: ip_address should be replaced by the IP address of the host. 22.6. Using the metrics System Role to configure and enable metrics collection for SQL Server This procedure describes how to use the metrics RHEL System Role to automate the configuration and enabling of metrics collection for Microsoft SQL Server via pcp on your local system.
Prerequisites The Ansible Core package is installed on the control machine. You have the rhel-system-roles package installed on the machine you want to monitor. You have installed Microsoft SQL Server for Red Hat Enterprise Linux and established a 'trusted' connection to an SQL server. See Install SQL Server and create a database on Red Hat . You have installed the Microsoft ODBC driver for SQL Server for Red Hat Enterprise Linux. See Red Hat Enterprise Server and Oracle Linux . Procedure Configure localhost in the /etc/ansible/hosts Ansible inventory by adding the following content to the inventory: Create an Ansible playbook that contains the following content: Note Since metrics_manage_firewall and metrics_manage_selinux are both set to true, the metrics role will use the firewall and selinux roles to manage the ports used by the metrics role. Run the Ansible playbook: Verification steps Use the pcp command to verify that SQL Server PMDA agent (mssql) is loaded and running: Additional resources For more information about using Performance Co-Pilot for Microsoft SQL Server, see this Red Hat Developers Blog post. | [
"localhost ansible_connection=local",
"--- - name: Manage metrics hosts: localhost vars: metrics_graph_service: yes metrics_manage_firewall: true metrics_manage_selinux: true roles: - rhel-system-roles.metrics",
"ansible-playbook name_of_your_playbook .yml",
"[remotes] webserver.example.com database.example.com",
"--- - hosts: remotes vars: metrics_retention_days: 0 metrics_manage_firewall: true metrics_manage_selinux: true roles: - rhel-system-roles.metrics",
"ansible-playbook name_of_your_playbook .yml -k",
"--- - hosts: localhost vars: metrics_graph_service: yes metrics_query_service: yes metrics_retention_days: 10 metrics_monitored_hosts: [\" database.example.com \", \" webserver.example.com \"] metrics_manage_firewall: yes metrics_manage_selinux: yes roles: - rhel-system-roles.metrics",
"ansible-playbook name_of_your_playbook .yml",
"--- vars: metrics_username: your_username metrics_password: your_password metrics_manage_firewall: true metrics_manage_selinux: true",
"ansible-playbook name_of_your_playbook .yml",
"pminfo -f -h \"pcp:// ip_adress ?username= your_username \" disk.dev.read Password: disk.dev.read inst [0 or \"sda\"] value 19540",
"localhost ansible_connection=local",
"--- - hosts: localhost vars: metrics_from_mssql: true metrics_manage_firewall: true metrics_manage_selinux: true roles: - role: rhel-system-roles.metrics",
"ansible-playbook name_of_your_playbook .yml",
"pcp platform: Linux rhel82-2.local 4.18.0-167.el8.x86_64 #1 SMP Sun Dec 15 01:24:23 UTC 2019 x86_64 hardware: 2 cpus, 1 disk, 1 node, 2770MB RAM timezone: PDT+7 services: pmcd pmproxy pmcd: Version 5.0.2-1, 12 agents, 4 clients pmda: root pmcd proc pmproxy xfs linux nfsclient mmv kvm mssql jbd2 dm pmlogger: primary logger: /var/log/pcp/pmlogger/rhel82-2.local/20200326.16.31 pmie: primary engine: /var/log/pcp/pmie/rhel82-2.local/pmie.log"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/automating_system_administration_by_using_rhel_system_roles_in_rhel_7.9/monitoring-performance-by-using-the-metrics-rhel-system-role_automating-system-administration-by-using-rhel-system-roles |
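Putting the pieces of this chapter together, a single playbook can combine visualization, querying, retention, and authentication, since all of these are plain role variables. The sketch below only uses variables documented above; the user name and password are placeholders and the file name is arbitrary.
# illustrative combined playbook; every variable is documented in this chapter
cat > metrics-full.yml <<'EOF'
---
- hosts: localhost
  vars:
    metrics_graph_service: yes
    metrics_query_service: yes
    metrics_retention_days: 14
    metrics_username: your_username
    metrics_password: your_password
    metrics_manage_firewall: true
    metrics_manage_selinux: true
  roles:
    - rhel-system-roles.metrics
EOF
# run it against the local system (add -k if password-based SSH is needed for remote hosts)
ansible-playbook metrics-full.yml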
Network Observability | Network Observability OpenShift Container Platform 4.18 Configuring and using the Network Observability Operator in OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-from-hostnetwork namespace: netobserv spec: podSelector: matchLabels: app: netobserv-operator ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/host-network: '' policyTypes: - Ingress",
"apiVersion: v1 kind: Secret metadata: name: loki-s3 namespace: netobserv 1 stringData: access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK access_key_secret: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo= bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: netobserv 1 spec: size: 1x.small 2 storage: schemas: - version: v12 effectiveDate: '2022-06-01' secret: name: loki-s3 type: s3 storageClassName: gp3 3 tenants: mode: openshift-network",
"oc adm groups new cluster-admin",
"oc adm groups add-users cluster-admin <username>",
"oc adm policy add-cluster-role-to-group cluster-admin cluster-admin",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: netobserv spec: tenants: mode: openshift-network 1 openshift: adminGroups: 2 - cluster-admin - custom-admin-group 3",
"spec: limits: global: ingestion: ingestionBurstSize: 40 ingestionRate: 20 maxGlobalStreamsPerTenant: 25000 queries: maxChunksPerQuery: 2000000 maxEntriesLimitPerQuery: 10000 maxQuerySeries: 3000",
"oc adm policy add-cluster-role-to-user netobserv-reader <user_group_or_name>",
"oc adm policy add-role-to-user netobserv-metrics-reader <user_group_or_name> -n <namespace>",
"oc adm policy add-cluster-role-to-user netobserv-reader <user_group_or_name>",
"oc adm policy add-cluster-role-to-user cluster-monitoring-view <user_group_or_name>",
"oc adm policy add-cluster-role-to-user netobserv-metrics-reader <user_group_or_name>",
"oc get crd flowcollectors.flows.netobserv.io -ojsonpath='{.status.storedVersions}'",
"apiVersion: migration.k8s.io/v1alpha1 kind: StorageVersionMigration metadata: name: migrate-flowcollector-v1alpha1 spec: resource: group: flows.netobserv.io resource: flowcollectors version: v1alpha1",
"oc apply -f migrate-flowcollector-v1alpha1.yaml",
"oc edit crd flowcollectors.flows.netobserv.io",
"oc get flowcollector cluster -o yaml > flowcollector-1.5.yaml",
"oc get crd flowcollectors.flows.netobserv.io -ojsonpath='{.status.storedVersions}'",
"oc get flowcollector/cluster",
"NAME AGENT SAMPLING (EBPF) DEPLOYMENT MODEL STATUS cluster EBPF 50 DIRECT Ready",
"oc get pods -n netobserv",
"NAME READY STATUS RESTARTS AGE flowlogs-pipeline-56hbp 1/1 Running 0 147m flowlogs-pipeline-9plvv 1/1 Running 0 147m flowlogs-pipeline-h5gkb 1/1 Running 0 147m flowlogs-pipeline-hh6kf 1/1 Running 0 147m flowlogs-pipeline-w7vv5 1/1 Running 0 147m netobserv-plugin-cdd7dc6c-j8ggp 1/1 Running 0 147m",
"oc get pods -n netobserv-privileged",
"NAME READY STATUS RESTARTS AGE netobserv-ebpf-agent-4lpp6 1/1 Running 0 151m netobserv-ebpf-agent-6gbrk 1/1 Running 0 151m netobserv-ebpf-agent-klpl9 1/1 Running 0 151m netobserv-ebpf-agent-vrcnf 1/1 Running 0 151m netobserv-ebpf-agent-xf5jh 1/1 Running 0 151m",
"oc get pods -n openshift-operators-redhat",
"NAME READY STATUS RESTARTS AGE loki-operator-controller-manager-5f6cff4f9d-jq25h 2/2 Running 0 18h lokistack-compactor-0 1/1 Running 0 18h lokistack-distributor-654f87c5bc-qhkhv 1/1 Running 0 18h lokistack-distributor-654f87c5bc-skxgm 1/1 Running 0 18h lokistack-gateway-796dc6ff7-c54gz 2/2 Running 0 18h lokistack-index-gateway-0 1/1 Running 0 18h lokistack-index-gateway-1 1/1 Running 0 18h lokistack-ingester-0 1/1 Running 0 18h lokistack-ingester-1 1/1 Running 0 18h lokistack-ingester-2 1/1 Running 0 18h lokistack-querier-66747dc666-6vh5x 1/1 Running 0 18h lokistack-querier-66747dc666-cjr45 1/1 Running 0 18h lokistack-querier-66747dc666-xh8rq 1/1 Running 0 18h lokistack-query-frontend-85c6db4fbd-b2xfb 1/1 Running 0 18h lokistack-query-frontend-85c6db4fbd-jm94f 1/1 Running 0 18h",
"oc describe flowcollector/cluster",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF 1 ebpf: sampling: 50 2 logLevel: info privileged: false resources: requests: memory: 50Mi cpu: 100m limits: memory: 800Mi processor: 3 logLevel: info resources: requests: memory: 100Mi cpu: 100m limits: memory: 800Mi logTypes: Flows advanced: conversationEndTimeout: 10s conversationHeartbeatInterval: 30s loki: 4 mode: LokiStack 5 consolePlugin: register: true logLevel: info portNaming: enable: true portNames: \"3100\": loki quickFilters: 6 - name: Applications filter: src_namespace!: 'openshift-,netobserv' dst_namespace!: 'openshift-,netobserv' default: true - name: Infrastructure filter: src_namespace: 'openshift-,netobserv' dst_namespace: 'openshift-,netobserv' - name: Pods network filter: src_kind: 'Pod' dst_kind: 'Pod' default: true - name: Services network filter: dst_kind: 'Service'",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: deploymentModel: Kafka 1 kafka: address: \"kafka-cluster-kafka-bootstrap.netobserv\" 2 topic: network-flows 3 tls: enable: false 4",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: exporters: - type: Kafka 1 kafka: address: \"kafka-cluster-kafka-bootstrap.netobserv\" topic: netobserv-flows-export 2 tls: enable: false 3 - type: IPFIX 4 ipfix: targetHost: \"ipfix-collector.ipfix.svc.cluster.local\" targetPort: 4739 transport: tcp or udp 5 - type: OpenTelemetry 6 openTelemetry: targetHost: my-otelcol-collector-headless.otlp.svc targetPort: 4317 type: grpc 7 logs: 8 enable: true metrics: 9 enable: true prefix: netobserv pushTimeInterval: 20s 10 expiryTime: 2m # fieldsMapping: 11 # input: SrcAddr # output: source.address",
"oc patch flowcollector cluster --type=json -p \"[{\"op\": \"replace\", \"path\": \"/spec/agent/ebpf/sampling\", \"value\": <new value>}] -n netobserv\"",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv networkPolicy: enable: true 1 additionalNamespaces: [\"openshift-console\", \"openshift-monitoring\"] 2",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy spec: ingress: - from: - podSelector: {} - namespaceSelector: matchLabels: kubernetes.io/metadata.name: netobserv-privileged - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-console ports: - port: 9001 protocol: TCP - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-monitoring podSelector: {} policyTypes: - Ingress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: netobserv namespace: netobserv-privileged spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-monitoring podSelector: {} policyTypes: - Ingress",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: processor: logTypes: Flows 1 advanced: conversationEndTimeout: 10s 2 conversationHeartbeatInterval: 30s 3",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - PacketDrop 1 privileged: true 2",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - DNSTracking 1 sampling: 1 2",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - FlowRTT 1",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: processor: addZone: true",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: flowFilter: action: Accept 1 cidr: 172.210.150.1/24 2 protocol: SCTP direction: Ingress destPortRange: 80-100 peerIP: 10.10.10.10 enable: true 3",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: flowFilter: action: Accept 1 cidr: 0.0.0.0/0 2 protocol: TCP direction: Egress sourcePort: 100 peerIP: 192.168.127.12 3 enable: true 4",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - PacketTranslation 1",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: agent: type: eBPF ebpf: # sampling: 1 1 privileged: true 2 features: - \"NetworkEvents\"",
"<Dropped_or_Allowed> by <network_event_and_event_name>, direction <Ingress_or_Egress>",
"apiVersion: monitoring.openshift.io/v1 kind: AlertingRule metadata: name: netobserv-alerts namespace: openshift-monitoring spec: groups: - name: NetObservAlerts rules: - alert: NetObservIncomingBandwidth annotations: message: |- {{ USDlabels.job }}: incoming traffic exceeding 10 MBps for 30s on {{ USDlabels.DstK8S_OwnerType }} {{ USDlabels.DstK8S_OwnerName }} ({{ USDlabels.DstK8S_Namespace }}). summary: \"High incoming traffic.\" expr: sum(rate(netobserv_workload_ingress_bytes_total {SrcK8S_Namespace=\"openshift-ingress\"}[1m])) by (job, DstK8S_Namespace, DstK8S_OwnerName, DstK8S_OwnerType) > 10000000 1 for: 30s labels: severity: warning",
"apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-traffic namespace: netobserv 1 spec: metricName: cluster_external_ingress_bytes_total 2 type: Counter 3 valueField: Bytes direction: Ingress 4 labels: [DstK8S_HostName,DstK8S_Namespace,DstK8S_OwnerName,DstK8S_OwnerType] 5 filters: 6 - field: SrcSubnetLabel matchType: Absence",
"apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-rtt namespace: netobserv 1 spec: metricName: cluster_external_ingress_rtt_seconds type: Histogram 2 valueField: TimeFlowRttNs direction: Ingress labels: [DstK8S_HostName,DstK8S_Namespace,DstK8S_OwnerName,DstK8S_OwnerType] filters: - field: SrcSubnetLabel matchType: Absence - field: TimeFlowRttNs matchType: Presence divider: \"1000000000\" 3 buckets: [\".001\", \".005\", \".01\", \".02\", \".03\", \".04\", \".05\", \".075\", \".1\", \".25\", \"1\"] 4",
"apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: network-policy-events namespace: netobserv spec: metricName: network_policy_events_total type: Counter labels: [NetworkEvents>Type, NetworkEvents>Namespace, NetworkEvents>Name, NetworkEvents>Action, NetworkEvents>Direction] 1 filters: - field: NetworkEvents>Feature value: acl flatten: [NetworkEvents] 2 remap: 3 \"NetworkEvents>Type\": type \"NetworkEvents>Namespace\": namespace \"NetworkEvents>Name\": name \"NetworkEvents>Direction\": direction",
"apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-traffic namespace: netobserv 1 charts: - dashboardName: Main 2 title: External ingress traffic unit: Bps type: SingleStat queries: - promQL: \"sum(rate(USDMETRIC[2m]))\" legend: \"\" - dashboardName: Main 3 sectionName: External title: Top external ingress traffic per workload unit: Bps type: StackArea queries: - promQL: \"sum(rate(USDMETRIC{DstK8S_Namespace!=\\\"\\\"}[2m])) by (DstK8S_Namespace, DstK8S_OwnerName)\" legend: \"{{DstK8S_Namespace}} / {{DstK8S_OwnerName}}\"",
"apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-traffic namespace: netobserv 1 charts: - dashboardName: Main 2 title: External ingress TCP latency unit: seconds type: SingleStat queries: - promQL: \"histogram_quantile(0.99, sum(rate(USDMETRIC_bucket[2m])) by (le)) > 0\" legend: \"p99\" - dashboardName: Main 3 sectionName: External title: \"Top external ingress sRTT per workload, p50 (ms)\" unit: seconds type: Line queries: - promQL: \"histogram_quantile(0.5, sum(rate(USDMETRIC_bucket{DstK8S_Namespace!=\\\"\\\"}[2m])) by (le,DstK8S_Namespace,DstK8S_OwnerName))*1000 > 0\" legend: \"{{DstK8S_Namespace}} / {{DstK8S_OwnerName}}\" - dashboardName: Main 4 sectionName: External title: \"Top external ingress sRTT per workload, p99 (ms)\" unit: seconds type: Line queries: - promQL: \"histogram_quantile(0.99, sum(rate(USDMETRIC_bucket{DstK8S_Namespace!=\\\"\\\"}[2m])) by (le,DstK8S_Namespace,DstK8S_OwnerName))*1000 > 0\" legend: \"{{DstK8S_Namespace}} / {{DstK8S_OwnerName}}\"",
"promQL: \"(sum(rate(USDMETRIC_sum{DstK8S_Namespace!=\\\"\\\"}[2m])) by (DstK8S_Namespace,DstK8S_OwnerName) / sum(rate(USDMETRIC_count{DstK8S_Namespace!=\\\"\\\"}[2m])) by (DstK8S_Namespace,DstK8S_OwnerName))*1000\"",
"apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flows-with-flags-per-destination spec: metricName: flows_with_flags_per_destination_total type: Counter labels: [SrcSubnetLabel,DstSubnetLabel,DstK8S_Name,DstK8S_Type,DstK8S_HostName,DstK8S_Namespace,Flags]",
"apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flows-with-flags-per-source spec: metricName: flows_with_flags_per_source_total type: Counter labels: [DstSubnetLabel,SrcSubnetLabel,SrcK8S_Name,SrcK8S_Type,SrcK8S_HostName,SrcK8S_Namespace,Flags]",
"apiVersion: monitoring.openshift.io/v1 kind: AlertingRule metadata: name: netobserv-syn-alerts namespace: openshift-monitoring spec: groups: - name: NetObservSYNAlerts rules: - alert: NetObserv-SYNFlood-in annotations: message: |- {{ USDlabels.job }}: incoming SYN-flood attack suspected to Host={{ USDlabels.DstK8S_HostName}}, Namespace={{ USDlabels.DstK8S_Namespace }}, Resource={{ USDlabels.DstK8S_Name }}. This is characterized by a high volume of SYN-only flows with different source IPs and/or ports. summary: \"Incoming SYN-flood\" expr: sum(rate(netobserv_flows_with_flags_per_destination_total{Flags=\"2\"}[1m])) by (job, DstK8S_HostName, DstK8S_Namespace, DstK8S_Name) > 300 1 for: 15s labels: severity: warning app: netobserv - alert: NetObserv-SYNFlood-out annotations: message: |- {{ USDlabels.job }}: outgoing SYN-flood attack suspected from Host={{ USDlabels.SrcK8S_HostName}}, Namespace={{ USDlabels.SrcK8S_Namespace }}, Resource={{ USDlabels.SrcK8S_Name }}. This is characterized by a high volume of SYN-only flows with different source IPs and/or ports. summary: \"Outgoing SYN-flood\" expr: sum(rate(netobserv_flows_with_flags_per_source_total{Flags=\"2\"}[1m])) by (job, SrcK8S_HostName, SrcK8S_Namespace, SrcK8S_Name) > 300 2 for: 15s labels: severity: warning app: netobserv",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: processor: metrics: disableAlerts: [NetObservLokiError, NetObservNoFlows] 1",
"apiVersion: monitoring.openshift.io/v1 kind: AlertingRule metadata: name: loki-alerts namespace: openshift-monitoring spec: groups: - name: LokiRateLimitAlerts rules: - alert: LokiTenantRateLimit annotations: message: |- {{ USDlabels.job }} {{ USDlabels.route }} is experiencing 429 errors. summary: \"At any number of requests are responded with the rate limit error code.\" expr: sum(irate(loki_request_duration_seconds_count{status_code=\"429\"}[1m])) by (job, namespace, route) / sum(irate(loki_request_duration_seconds_count[1m])) by (job, namespace, route) * 100 > 0 for: 10s labels: severity: warning",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: cacheMaxFlows: 200000 1",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: advanced: scheduling: tolerations: - key: \"<taint key>\" operator: \"Equal\" value: \"<taint value>\" effect: \"<taint effect>\" nodeSelector: <key>: <value> affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: name operator: In values: - app-worker-node priorityClassName: \"\"\"",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: privileged: true 1",
"oc get pod virt-launcher-<vm_name>-<suffix> -n <namespace> -o yaml",
"apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ \"name\": \"ovn-kubernetes\", \"interface\": \"eth0\", \"ips\": [ \"10.129.2.39\" ], \"mac\": \"0a:58:0a:81:02:27\", \"default\": true, \"dns\": {} }, { \"name\": \"my-vms/l2-network\", 1 \"interface\": \"podc0f69e19ba2\", 2 \"ips\": [ 3 \"10.10.10.15\" ], \"mac\": \"02:fb:f8:00:00:12\", 4 \"dns\": {} }] name: virt-launcher-fedora-aqua-fowl-13-zr2x9 namespace: my-vms spec: status:",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: agent: ebpf: privileged: true 1 processor: advanced: secondaryNetworks: - index: 2 - MAC 3 name: my-vms/l2-network 4",
"curl -LO https://mirror.openshift.com/pub/cgw/netobserv/latest/oc-netobserv-amd64",
"chmod +x ./oc-netobserv-amd64",
"sudo mv ./oc-netobserv-amd64 /usr/local/bin/oc-netobserv",
"oc netobserv version",
"Netobserv CLI version <version>",
"oc netobserv flows --enable_filter=true --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051",
"live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once",
"{ \"AgentIP\": \"10.0.1.76\", \"Bytes\": 561, \"DnsErrno\": 0, \"Dscp\": 20, \"DstAddr\": \"f904:ece9:ba63:6ac7:8018:1e5:7130:0\", \"DstMac\": \"0A:58:0A:80:00:37\", \"DstPort\": 9999, \"Duplicate\": false, \"Etype\": 2048, \"Flags\": 16, \"FlowDirection\": 0, \"IfDirection\": 0, \"Interface\": \"ens5\", \"K8S_FlowLayer\": \"infra\", \"Packets\": 1, \"Proto\": 6, \"SrcAddr\": \"3e06:6c10:6440:2:a80:37:b756:270f\", \"SrcMac\": \"0A:58:0A:80:00:01\", \"SrcPort\": 46934, \"TimeFlowEndMs\": 1709741962111, \"TimeFlowRttNs\": 121000, \"TimeFlowStartMs\": 1709741962111, \"TimeReceived\": 1709741964 }",
"sqlite3 ./output/flow/<capture_date_time>.db",
"sqlite> SELECT DnsLatencyMs, DnsFlagsResponseCode, DnsId, DstAddr, DstPort, Interface, Proto, SrcAddr, SrcPort, Bytes, Packets FROM flow WHERE DnsLatencyMs >10 LIMIT 10;",
"12|NoError|58747|10.128.0.63|57856||17|172.30.0.10|53|284|1 11|NoError|20486|10.128.0.52|56575||17|169.254.169.254|53|225|1 11|NoError|59544|10.128.0.103|51089||17|172.30.0.10|53|307|1 13|NoError|32519|10.128.0.52|55241||17|169.254.169.254|53|254|1 12|NoError|32519|10.0.0.3|55241||17|169.254.169.254|53|254|1 15|NoError|57673|10.128.0.19|59051||17|172.30.0.10|53|313|1 13|NoError|35652|10.0.0.3|46532||17|169.254.169.254|53|183|1 32|NoError|37326|10.0.0.3|52718||17|169.254.169.254|53|169|1 14|NoError|14530|10.0.0.3|58203||17|169.254.169.254|53|246|1 15|NoError|40548|10.0.0.3|45933||17|169.254.169.254|53|174|1",
"oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051",
"live table filter: [SrcK8S_Zone:us-west-1b] press enter to match multiple regular expressions at once",
"oc netobserv metrics --enable_filter=true --cidr=0.0.0.0/0 --protocol=TCP --port=49051",
"https://console-openshift-console.apps.rosa...openshiftapps.com/monitoring/dashboards/netobserv-cli",
"oc netobserv cleanup",
"oc netobserv [<command>] [<feature_option>] [<command_options>] 1",
"oc netobserv flows [<feature_option>] [<command_options>]",
"oc netobserv flows --enable_pkt_drop --enable_rtt --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051",
"oc netobserv packets [<option>]",
"oc netobserv packets --action=Accept --cidr=0.0.0.0/0 --protocol=TCP --port=49051",
"oc netobserv metrics [<option>]",
"oc netobserv metrics --enable_pkt_drop --protocol=TCP",
"oc adm must-gather --image-stream=openshift/must-gather --image=quay.io/netobserv/must-gather",
"oc -n netobserv get flowcollector cluster -o yaml",
"apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: consolePlugin: register: false",
"oc edit console.operator.openshift.io cluster",
"spec: plugins: - netobserv-plugin",
"oc -n netobserv edit flowcollector cluster -o yaml",
"apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: consolePlugin: register: true",
"oc get pods -n openshift-console -l app=console",
"oc delete pods -n openshift-console -l app=console",
"oc get pods -n netobserv -l app=netobserv-plugin",
"NAME READY STATUS RESTARTS AGE netobserv-plugin-68c7bbb9bb-b69q6 1/1 Running 0 21s",
"oc logs -n netobserv -l app=netobserv-plugin",
"time=\"2022-12-13T12:06:49Z\" level=info msg=\"Starting netobserv-console-plugin [build version: , build date: 2022-10-21 15:15] at log level info\" module=main time=\"2022-12-13T12:06:49Z\" level=info msg=\"listening on https://:9001\" module=server",
"oc delete pods -n netobserv -l app=flowlogs-pipeline-transformer",
"oc edit -n netobserv flowcollector.yaml -o yaml",
"apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: agent: type: EBPF ebpf: interfaces: [ 'br-int', 'br-ex' ] 1",
"oc edit subscription netobserv-operator -n openshift-netobserv-operator",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: netobserv-operator namespace: openshift-netobserv-operator spec: channel: stable config: resources: limits: memory: 800Mi 1 requests: cpu: 100m memory: 100Mi installPlanApproval: Automatic name: netobserv-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: <network_observability_operator_latest_version> 2",
"oc exec deployment/netobserv-plugin -n netobserv -- curl -G -s -H 'X-Scope-OrgID:network' -H 'Authorization: Bearer <api_token>' -k https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network/loki/api/v1/labels | jq",
"oc exec deployment/netobserv-plugin -n netobserv -- curl -G -s -H 'X-Scope-OrgID:network' -H 'Authorization: Bearer <api_token>' -k https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network/loki/api/v1/query --data-urlencode 'query={SrcK8S_Namespace=\"my-namespace\"}' | jq",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: netobserv spec: limits: global: ingestion: perStreamRateLimit: 6 1 perStreamRateLimitBurst: 30 2 tenants: mode: openshift-network managementState: Managed"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/network_observability/index |
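As a companion to the ingress examples above, the following is a minimal sketch of a FlowMetric for external egress traffic. The metric name, the valueField and filters fields, and the Src-side labels are assumptions derived from the patterns shown here; check them against the FlowMetric API of the installed NetObserv release before use.

apiVersion: flows.netobserv.io/v1alpha1
kind: FlowMetric
metadata:
  name: flowmetric-cluster-external-egress-traffic
  namespace: netobserv
spec:
  metricName: cluster_external_egress_bytes_total   # assumed metric name
  type: Counter
  valueField: Bytes                                  # count bytes rather than flow records
  filters:
  - field: DstSubnetLabel                            # keep flows whose destination is outside known cluster subnets
    matchType: Absence
  labels: [SrcK8S_HostName,SrcK8S_Namespace,SrcK8S_OwnerName,SrcK8S_OwnerType]
  charts:
  - dashboardName: Main
    sectionName: External
    title: Top external egress traffic per workload
    unit: Bps
    type: StackArea
    queries:
    - promQL: "sum(rate($METRIC{SrcK8S_Namespace!=\"\"}[2m])) by (SrcK8S_Namespace, SrcK8S_OwnerName)"
      legend: "{{SrcK8S_Namespace}} / {{SrcK8S_OwnerName}}"

The chart reuses the $METRIC placeholder convention from the ingress dashboards, so the same PromQL shape works for either traffic direction.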
Part II. Securing Data in Red Hat JBoss Data Grid | Part II. Securing Data in Red Hat JBoss Data Grid In Red Hat JBoss Data Grid, data security can be implemented in the following ways: Role-based Access Control JBoss Data Grid features role-based access control for operations on designated secured caches. Roles can be assigned to users who access your application, with roles mapped to permissions for cache and cache-manager operations. Only authenticated users are able to perform the operations that are authorized for their role. In Library mode, data is secured via role-based access control for CacheManagers and Caches, with authentication delegated to the container or application. In Remote Client-Server mode, JBoss Data Grid is secured by passing identity tokens from the Hot Rod client to the server, and role-based access control of Caches and CacheManagers. Node Authentication and Authorization Node-level security requires new nodes or merging partitions to authenticate before joining a cluster. Only authenticated nodes that are authorized to join the cluster are permitted to do so. This provides data protection by preventing unauthorized servers from storing your data. Encrypted Communications Within the Cluster JBoss Data Grid increases data security by supporting encrypted communications between the nodes in a cluster by using a user-specified cryptography algorithm, as supported by Java Cryptography Architecture (JCA). JBoss Data Grid also provides audit logging for operations, and the ability to encrypt communication between the Hot Rod Client and Server using Transport Layer Security (TLS/SSL). | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/part-securing_data_in_red_hat_jboss_data_grid
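To make the Library mode role-based access control concrete, here is a minimal sketch using the Infinispan configuration builders that underlie JBoss Data Grid. The role names, the permission assignments, and the IdentityRoleMapper import package are assumptions for illustration and can differ between releases, so verify them against the API of the JBoss Data Grid version in use.

import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.security.AuthorizationPermission;
import org.infinispan.security.impl.IdentityRoleMapper; // package location varies by release

public class SecuredCacheSketch {
    public static void main(String[] args) {
        // CacheManager-level authorization: declare roles and map them to permissions.
        GlobalConfigurationBuilder global = new GlobalConfigurationBuilder();
        global.security().authorization().enable()
              .principalRoleMapper(new IdentityRoleMapper()) // principal name is used directly as the role name
              .role("admin").permission(AuthorizationPermission.ALL)
              .role("writer").permission(AuthorizationPermission.WRITE)
              .role("reader").permission(AuthorizationPermission.READ);

        // Cache-level authorization: only the listed roles may operate on this cache.
        ConfigurationBuilder secured = new ConfigurationBuilder();
        secured.security().authorization().enable()
               .role("admin").role("writer").role("reader");

        // With authorization enabled, cache operations must run inside a Subject
        // (for example via Security.doAs) whose principals map to one of the roles above.
        DefaultCacheManager manager = new DefaultCacheManager(global.build(), secured.build());
        manager.stop();
    }
}

In Remote Client-Server mode the equivalent role-to-permission mapping is defined in the server configuration instead, and the Hot Rod client authenticates before those roles are applied.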
Chapter 12. Performance and reliability tuning | Chapter 12. Performance and reliability tuning 12.1. Flow control mechanisms If logs are produced faster than they can be collected, it can be difficult to predict or control the volume of logs being sent to an output. Not being able to predict or control the volume of logs being sent to an output can result in logs being lost. If there is a system outage and log buffers are accumulated without user control, this can also cause long recovery times and high latency when the connection is restored. As an administrator, you can limit logging rates by configuring flow control mechanisms for your logging. 12.1.1. Benefits of flow control mechanisms The cost and volume of logging can be predicted more accurately in advance. Noisy containers cannot produce unbounded log traffic that drowns out other containers. Ignoring low-value logs reduces the load on the logging infrastructure. High-value logs can be preferred over low-value logs by assigning higher rate limits. 12.1.2. Configuring rate limits Rate limits are configured per collector, which means that the maximum rate of log collection is the number of collector instances multiplied by the rate limit. Because logs are collected from each node's file system, a collector is deployed on each cluster node. For example, in a 3-node cluster, with a maximum rate limit of 10 records per second per collector, the maximum rate of log collection is 30 records per second. Because the exact byte size of a record as written to an output can vary due to transformations, different encodings, or other factors, rate limits are set in number of records instead of bytes. You can configure rate limits in the ClusterLogForwarder custom resource (CR) in two ways: Output rate limit Limit the rate of outbound logs to selected outputs, for example, to match the network or storage capacity of an output. The output rate limit controls the aggregated per-output rate. Input rate limit Limit the per-container rate of log collection for selected containers. 12.1.3. Configuring log forwarder output rate limits You can limit the rate of outbound logs to a specified output by configuring the ClusterLogForwarder custom resource (CR). Prerequisites You have installed the Red Hat OpenShift Logging Operator. You have administrator permissions. Procedure Add a maxRecordsPerSecond limit value to the ClusterLogForwarder CR for a specified output. The following example shows how to configure a per collector output rate limit for a Kafka broker output named kafka-example : Example ClusterLogForwarder CR apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: # ... outputs: - name: kafka-example 1 type: kafka 2 limit: maxRecordsPerSecond: 1000000 3 # ... 1 The output name. 2 The type of output. 3 The log output rate limit. This value sets the maximum Quantity of logs that can be sent to the Kafka broker per second. This value is not set by default. The default behavior is best effort, and records are dropped if the log forwarder cannot keep up. If this value is 0 , no logs are forwarded. Apply the ClusterLogForwarder CR: Example command USD oc apply -f <filename>.yaml Additional resources Log output types 12.1.4. Configuring log forwarder input rate limits You can limit the rate of incoming logs that are collected by configuring the ClusterLogForwarder custom resource (CR). You can set input limits on a per-container or per-namespace basis. 
Prerequisites You have installed the Red Hat OpenShift Logging Operator. You have administrator permissions. Procedure Add a maxRecordsPerSecond limit value to the ClusterLogForwarder CR for a specified input. The following examples show how to configure input rate limits for different scenarios: Example ClusterLogForwarder CR that sets a per-container limit for containers with certain labels apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: # ... inputs: - name: <input_name> 1 application: selector: matchLabels: { example: label } 2 containerLimit: maxRecordsPerSecond: 0 3 # ... 1 The input name. 2 A list of labels. If these labels match labels that are applied to a pod, the per-container limit specified in the maxRecordsPerSecond field is applied to those containers. 3 Configures the rate limit. Setting the maxRecordsPerSecond field to 0 means that no logs are collected for the container. Setting the maxRecordsPerSecond field to some other value means that a maximum of that number of records per second are collected for the container. Example ClusterLogForwarder CR that sets a per-container limit for containers in selected namespaces apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: # ... inputs: - name: <input_name> 1 application: namespaces: [ example-ns-1, example-ns-2 ] 2 containerLimit: maxRecordsPerSecond: 10 3 - name: <input_name> application: namespaces: [ test ] containerLimit: maxRecordsPerSecond: 1000 # ... 1 The input name. 2 A list of namespaces. The per-container limit specified in the maxRecordsPerSecond field is applied to all containers in the namespaces listed. 3 Configures the rate limit. Setting the maxRecordsPerSecond field to 10 means that a maximum of 10 records per second are collected for each container in the namespaces listed. Apply the ClusterLogForwarder CR: Example command USD oc apply -f <filename>.yaml 12.2. Filtering logs by content Collecting all logs from a cluster might produce a large amount of data, which can be expensive to transport and store. You can reduce the volume of your log data by filtering out low priority data that does not need to be stored. Logging provides content filters that you can use to reduce the volume of log data. Note Content filters are distinct from input selectors. input selectors select or ignore entire log streams based on source metadata. Content filters edit log streams to remove and modify records based on the record content. Log data volume can be reduced by using one of the following methods: Configuring content filters to drop unwanted log records Configuring content filters to prune log records 12.2.1. Configuring content filters to drop unwanted log records When the drop filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector drops unwanted log records that match the specified configuration. Prerequisites You have installed the Red Hat OpenShift Logging Operator. You have administrator permissions. You have created a ClusterLogForwarder custom resource (CR). Procedure Add a configuration for a filter to the filters spec in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to drop log records based on regular expressions: Example ClusterLogForwarder CR apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... 
spec: filters: - name: <filter_name> type: drop 1 drop: 2 - test: 3 - field: .kubernetes.labels."foo-bar/baz" 4 matches: .+ 5 - field: .kubernetes.pod_name notMatches: "my-pod" 6 pipelines: - name: <pipeline_name> 7 filterRefs: ["<filter_name>"] # ... 1 Specifies the type of filter. The drop filter drops log records that match the filter configuration. 2 Specifies configuration options for applying the drop filter. 3 Specifies the configuration for tests that are used to evaluate whether a log record is dropped. If all the conditions specified for a test are true, the test passes and the log record is dropped. When multiple tests are specified for the drop filter configuration, if any of the tests pass, the record is dropped. If there is an error evaluating a condition, for example, the field is missing from the log record being evaluated, that condition evaluates to false. 4 Specifies a dot-delimited field path, which is a path to a field in the log record. The path can contain alpha-numeric characters and underscores ( a-zA-Z0-9_ ), for example, .kubernetes.namespace_name . If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz" . You can include multiple field paths in a single test configuration, but they must all evaluate to true for the test to pass and the drop filter to be applied. 5 Specifies a regular expression. If log records match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. 6 Specifies a regular expression. If log records do not match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. 7 Specifies the pipeline that the drop filter is applied to. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml Additional examples The following additional example shows how you can configure the drop filter to only keep higher priority log records: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: filters: - name: important type: drop drop: test: - field: .message notMatches: "(?i)critical|error" - field: .level matches: "info|warning" # ... In addition to including multiple field paths in a single test configuration, you can also include additional tests that are treated as OR checks. In the following example, records are dropped if either test configuration evaluates to true. However, for the second test configuration, both field specs must be true for it to be evaluated to true: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: filters: - name: important type: drop drop: test: - field: .kubernetes.namespace_name matches: "^open" test: - field: .log_type matches: "application" - field: .kubernetes.pod_name notMatches: "my-pod" # ... 12.2.2. Configuring content filters to prune log records When the prune filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector prunes log records by removing low value fields such as pod annotations. Prerequisites You have installed the Red Hat OpenShift Logging Operator. You have administrator permissions. You have created a ClusterLogForwarder custom resource (CR). Procedure Add a configuration for a filter to the prune spec in the ClusterLogForwarder CR. 
The following example shows how to configure the ClusterLogForwarder CR to prune log records based on field paths: Important If both are specified, records are pruned based on the notIn array first, which takes precedence over the in array. After records have been pruned by using the notIn array, they are then pruned by using the in array. Example ClusterLogForwarder CR apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: filters: - name: <filter_name> type: prune 1 prune: 2 in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 notIn: [.kubernetes,.log_type,.message,."@timestamp"] 4 pipelines: - name: <pipeline_name> 5 filterRefs: ["<filter_name>"] # ... 1 Specify the type of filter. The prune filter prunes log records by configured fields. 2 Specify configuration options for applying the prune filter. The in and notIn fields are specified as arrays of dot-delimited field paths, which are paths to fields in log records. These paths can contain alpha-numeric characters and underscores ( a-zA-Z0-9_ ), for example, .kubernetes.namespace_name . If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz" . 3 Optional: Any fields that are specified in this array are removed from the log record. 4 Optional: Any fields that are not specified in this array are removed from the log record. 5 Specify the pipeline that the prune filter is applied to. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 12.2.3. Additional resources About forwarding logs to third-party systems 12.3. Filtering logs by metadata You can filter logs in the ClusterLogForwarder CR to select or ignore an entire log stream based on the metadata by using the input selector. As an administrator or developer, you can include or exclude the log collection to reduce the memory and CPU load on the collector. Important You can use this feature only if the Vector collector is set up in your logging deployment. Note input spec filtering is different from content filtering. input selectors select or ignore entire log streams based on the source metadata. Content filters edit the log streams to remove and modify the records based on the record content. 12.3.1. Filtering application logs at input by including or excluding the namespace or container name You can include or exclude the application logs based on the namespace and container name by using the input selector. Prerequisites You have installed the Red Hat OpenShift Logging Operator. You have administrator permissions. You have created a ClusterLogForwarder custom resource (CR). Procedure Add a configuration to include or exclude the namespace and container names in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to include or exclude namespaces and container names: Example ClusterLogForwarder CR apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder # ... spec: inputs: - name: mylogs application: includes: - namespace: "my-project" 1 container: "my-container" 2 excludes: - container: "other-container*" 3 namespace: "other-namespace" 4 # ... 1 Specifies that the logs are only collected from these namespaces. 2 Specifies that the logs are only collected from these containers. 3 Specifies the pattern of namespaces to ignore when collecting the logs. 4 Specifies the set of containers to ignore when collecting the logs. 
Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml The excludes option takes precedence over includes . 12.3.2. Filtering application logs at input by including either the label expressions or matching label key and values You can include the application logs based on the label expressions or a matching label key and its values by using the input selector. Prerequisites You have installed the Red Hat OpenShift Logging Operator. You have administrator permissions. You have created a ClusterLogForwarder custom resource (CR). Procedure Add a configuration for a filter to the input spec in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to include logs based on label expressions or matched label key/values: Example ClusterLogForwarder CR apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder # ... spec: inputs: - name: mylogs application: selector: matchExpressions: - key: env 1 operator: In 2 values: ["prod", "qa"] 3 - key: zone operator: NotIn values: ["east", "west"] matchLabels: 4 app: one name: app1 # ... 1 Specifies the label key to match. 2 Specifies the operator. Valid values include: In , NotIn , Exists , and DoesNotExist . 3 Specifies an array of string values. If the operator value is either Exists or DoesNotExist , the value array must be empty. 4 Specifies an exact key or value mapping. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 12.3.3. Filtering the audit and infrastructure log inputs by source You can define the list of audit and infrastructure sources to collect the logs by using the input selector. Prerequisites You have installed the Red Hat OpenShift Logging Operator. You have administrator permissions. You have created a ClusterLogForwarder custom resource (CR). Procedure Add a configuration to define the audit and infrastructure sources in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to define audit and infrastructure sources: Example ClusterLogForwarder CR apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder # ... spec: inputs: - name: mylogs1 infrastructure: sources: 1 - node - name: mylogs2 audit: sources: 2 - kubeAPI - openshiftAPI - ovn # ... 1 Specifies the list of infrastructure sources to collect. The valid sources include: node : Journal log from the node container : Logs from the workloads deployed in the namespaces 2 Specifies the list of audit sources to collect. The valid sources include: kubeAPI : Logs from the Kubernetes API servers openshiftAPI : Logs from the OpenShift API servers auditd : Logs from a node auditd service ovn : Logs from an open virtual network service Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml | [
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: outputs: - name: kafka-example 1 type: kafka 2 limit: maxRecordsPerSecond: 1000000 3",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: inputs: - name: <input_name> 1 application: selector: matchLabels: { example: label } 2 containerLimit: maxRecordsPerSecond: 0 3",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: inputs: - name: <input_name> 1 application: namespaces: [ example-ns-1, example-ns-2 ] 2 containerLimit: maxRecordsPerSecond: 10 3 - name: <input_name> application: namespaces: [ test ] containerLimit: maxRecordsPerSecond: 1000",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: filters: - name: <filter_name> type: drop 1 drop: 2 - test: 3 - field: .kubernetes.labels.\"foo-bar/baz\" 4 matches: .+ 5 - field: .kubernetes.pod_name notMatches: \"my-pod\" 6 pipelines: - name: <pipeline_name> 7 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: filters: - name: important type: drop drop: test: - field: .message notMatches: \"(?i)critical|error\" - field: .level matches: \"info|warning\"",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: filters: - name: important type: drop drop: test: - field: .kubernetes.namespace_name matches: \"^open\" test: - field: .log_type matches: \"application\" - field: .kubernetes.pod_name notMatches: \"my-pod\"",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: filters: - name: <filter_name> type: prune 1 prune: 2 in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 notIn: [.kubernetes,.log_type,.message,.\"@timestamp\"] 4 pipelines: - name: <pipeline_name> 5 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: mylogs application: includes: - namespace: \"my-project\" 1 container: \"my-container\" 2 excludes: - container: \"other-container*\" 3 namespace: \"other-namespace\" 4",
"oc apply -f <filename>.yaml",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: mylogs application: selector: matchExpressions: - key: env 1 operator: In 2 values: [\"prod\", \"qa\"] 3 - key: zone operator: NotIn values: [\"east\", \"west\"] matchLabels: 4 app: one name: app1",
"oc apply -f <filename>.yaml",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder spec: inputs: - name: mylogs1 infrastructure: sources: 1 - node - name: mylogs2 audit: sources: 2 - kubeAPI - openshiftAPI - ovn",
"oc apply -f <filename>.yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/logging/performance-and-reliability-tuning |
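The flow-control and filtering mechanisms described in this chapter can be combined in a single ClusterLogForwarder. The following is a minimal sketch wiring a per-container input limit, a drop filter, a prune filter, and a rate-limited Kafka output into one pipeline; the input, filter, and pipeline names and the broker URL are hypothetical placeholders.

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata: # ...
spec:
  inputs:
  - name: rate-limited-apps              # hypothetical input name
    application:
      namespaces: [ my-project ]
      containerLimit:
        maxRecordsPerSecond: 100         # per-container collection limit
  filters:
  - name: drop-low-severity              # drop filter: discard debug/trace records
    type: drop
    drop:
    - test:
      - field: .level
        matches: "debug|trace"
  - name: prune-annotations              # prune filter: strip noisy metadata fields
    type: prune
    prune:
      in: [.kubernetes.annotations]
  outputs:
  - name: kafka-example                  # per-collector output rate limit
    type: kafka
    url: tls://broker.example.org:9093   # placeholder broker address
    limit:
      maxRecordsPerSecond: 1000000
  pipelines:
  - name: limited-and-filtered
    inputRefs: [ rate-limited-apps ]
    filterRefs: [ drop-low-severity, prune-annotations ]
    outputRefs: [ kafka-example ]

Because rate limits are applied per collector instance, the effective cluster-wide ceiling is the configured value multiplied by the number of nodes running the collector.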
22.16.11. Configuring the iburst Option | 22.16.11. Configuring the iburst Option To reduce the time taken for initial synchronization, add the following option to the end of a server command: iburst When the server is unreachable, this option sends a burst of eight packets instead of the usual one packet. The packet spacing is normally 2 s; however, the spacing between the first and second packets can be changed with the calldelay command to allow additional time for a modem or ISDN call to complete. The option is intended for use with the server command. As of Red Hat Enterprise Linux 6.5, iburst is a default option in the configuration file. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2_configuring_the_iburst_option
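For example, a typical /etc/ntp.conf applies the option to each server line; the host names shown are the Red Hat vendor pool servers and can be replaced with local time sources:

# /etc/ntp.conf: iburst speeds up the first synchronization after ntpd starts
server 0.rhel.pool.ntp.org iburst
server 1.rhel.pool.ntp.org iburst
server 2.rhel.pool.ntp.org iburst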
Chapter 10. ImageTag [image.openshift.io/v1] | Chapter 10. ImageTag [image.openshift.io/v1] Description ImageTag represents a single tag within an image stream and includes the spec, the status history, and the currently referenced image (if any) of the provided tag. This type replaces the ImageStreamTag by providing a full view of the tag. ImageTags are returned for every spec or status tag present on the image stream. If no tag exists in either form a not found error will be returned by the API. A create operation will succeed if no spec tag has already been defined and the spec field is set. Delete will remove both spec and status elements from the image stream. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec status image 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources image object Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object TagReference specifies optional annotations for images using this tag and an optional reference to an ImageStreamTag, ImageStreamImage, or DockerImage this tag should track. status object NamedTagEventList relates a tag to its image history. 10.1.1. .image Description Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 
Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources dockerImageConfig string DockerImageConfig is a JSON blob that the runtime uses to set up the container. This is a part of manifest schema v2. Will not be set when the image represents a manifest list. dockerImageLayers array DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. dockerImageLayers[] object ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. dockerImageManifest string DockerImageManifest is the raw JSON of the manifest dockerImageManifestMediaType string DockerImageManifestMediaType specifies the mediaType of manifest. This is a part of manifest schema v2. dockerImageManifests array DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. dockerImageManifests[] object ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. dockerImageMetadata RawExtension DockerImageMetadata contains metadata about this image dockerImageMetadataVersion string DockerImageMetadataVersion conveys the version of the object, which if empty defaults to "1.0" dockerImageReference string DockerImageReference is the string that can be used to pull this image. dockerImageSignatures array (string) DockerImageSignatures provides the signatures as opaque blobs. This is a part of manifest schema v1. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signatures array Signatures holds all signatures of the image. signatures[] object ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 10.1.2. .image.dockerImageLayers Description DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. Type array 10.1.3. .image.dockerImageLayers[] Description ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. Type object Required name size mediaType Property Type Description mediaType string MediaType of the referenced object. name string Name of the layer as defined by the underlying store. 
size integer Size of the layer in bytes as defined by the underlying store. 10.1.4. .image.dockerImageManifests Description DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. Type array 10.1.5. .image.dockerImageManifests[] Description ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. Type object Required digest mediaType manifestSize architecture os Property Type Description architecture string Architecture specifies the supported CPU architecture, for example amd64 or ppc64le . digest string Digest is the unique identifier for the manifest. It refers to an Image object. manifestSize integer ManifestSize represents the size of the raw object contents, in bytes. mediaType string MediaType defines the type of the manifest, possible values are application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json or application/vnd.docker.distribution.manifest.v1+json. os string OS specifies the operating system, for example linux . variant string Variant is an optional field representing a variant of the CPU, for example v6 to specify a particular CPU variant of the ARM CPU. 10.1.6. .image.signatures Description Signatures holds all signatures of the image. Type array 10.1.7. .image.signatures[] Description ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required type content Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources conditions array Conditions represent the latest available observations of a signature's current state. conditions[] object SignatureCondition describes an image signature condition of particular kind at particular probe time. content string Required: An opaque binary string which is an image's signature. created Time If specified, it is the time of signature's creation. imageIdentity string A human readable string representing image's identity. It could be a product name and version, or an image pull spec (e.g. "registry.access.redhat.com/rhel7/rhel:7.2"). issuedBy object SignatureIssuer holds information about an issuer of signing certificate or key. issuedTo object SignatureSubject holds information about a person or entity who created the signature. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signedClaims object (string) Contains claims from the signature. type string Required: Describes a type of stored blob. 10.1.8. .image.signatures[].conditions Description Conditions represent the latest available observations of a signature's current state. Type array 10.1.9. .image.signatures[].conditions[] Description SignatureCondition describes an image signature condition of particular kind at particular probe time. Type object Required type status Property Type Description lastProbeTime Time Last time the condition was checked. lastTransitionTime Time Last time the condition transit from one status to another. message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of signature condition, Complete or Failed. 10.1.10. .image.signatures[].issuedBy Description SignatureIssuer holds information about an issuer of signing certificate or key. Type object Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. 10.1.11. .image.signatures[].issuedTo Description SignatureSubject holds information about a person or entity who created the signature. Type object Required publicKeyID Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. publicKeyID string If present, it is a human readable key id of public key belonging to the subject used to verify image signature. It should contain at least 64 lowest bits of public key's fingerprint (e.g. 0x685ebe62bf278440). 10.1.12. .spec Description TagReference specifies optional annotations for images using this tag and an optional reference to an ImageStreamTag, ImageStreamImage, or DockerImage this tag should track. Type object Required name Property Type Description annotations object (string) Optional; if specified, annotations that are applied to images retrieved via ImageStreamTags. from ObjectReference Optional; if specified, a reference to another image that this tag should point to. Valid values are ImageStreamTag, ImageStreamImage, and DockerImage. ImageStreamTag references can only reference a tag within this same ImageStream. generation integer Generation is a counter that tracks mutations to the spec tag (user intent). When a tag reference is changed the generation is set to match the current stream generation (which is incremented every time spec is changed). Other processes in the system like the image importer observe that the generation of spec tag is newer than the generation recorded in the status and use that as a trigger to import the newest remote tag. To trigger a new import, clients may set this value to zero which will reset the generation to the latest stream generation. Legacy clients will send this value as nil which will be merged with the current tag generation. importPolicy object TagImportPolicy controls how images related to this tag will be imported. name string Name of the tag reference boolean Reference states if the tag will be imported. Default value is false, which means the tag will be imported. referencePolicy object TagReferencePolicy describes how pull-specs for images in this image stream tag are generated when image change triggers in deployment configs or builds are resolved. 
This allows the image stream author to control how images are accessed. 10.1.13. .spec.importPolicy Description TagImportPolicy controls how images related to this tag will be imported. Type object Property Type Description importMode string ImportMode describes how to import an image manifest. insecure boolean Insecure is true if the server may bypass certificate verification or connect directly over HTTP during image import. scheduled boolean Scheduled indicates to the server that this tag should be periodically checked to ensure it is up to date, and imported 10.1.14. .spec.referencePolicy Description TagReferencePolicy describes how pull-specs for images in this image stream tag are generated when image change triggers in deployment configs or builds are resolved. This allows the image stream author to control how images are accessed. Type object Required type Property Type Description type string Type determines how the image pull spec should be transformed when the image stream tag is used in deployment config triggers or new builds. The default value is Source , indicating the original location of the image should be used (if imported). The user may also specify Local , indicating that the pull spec should point to the integrated container image registry and leverage the registry's ability to proxy the pull to an upstream registry. Local allows the credentials used to pull this image to be managed from the image stream's namespace, so others on the platform can access a remote image but have no access to the remote secret. It also allows the image layers to be mirrored into the local registry which the images can still be pulled even if the upstream registry is unavailable. 10.1.15. .status Description NamedTagEventList relates a tag to its image history. Type object Required tag items Property Type Description conditions array Conditions is an array of conditions that apply to the tag event list. conditions[] object TagEventCondition contains condition information for a tag event. items array Standard object's metadata. items[] object TagEvent is used by ImageStreamStatus to keep a historical record of images associated with a tag. tag string Tag is the tag for which the history is recorded 10.1.16. .status.conditions Description Conditions is an array of conditions that apply to the tag event list. Type array 10.1.17. .status.conditions[] Description TagEventCondition contains condition information for a tag event. Type object Required type status generation Property Type Description generation integer Generation is the spec tag generation that this status corresponds to lastTransitionTime Time LastTransitionTIme is the time the condition transitioned from one status to another. message string Message is a human readable description of the details about last transition, complementing reason. reason string Reason is a brief machine readable explanation for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of tag event condition, currently only ImportSuccess 10.1.18. .status.items Description Standard object's metadata. Type array 10.1.19. .status.items[] Description TagEvent is used by ImageStreamStatus to keep a historical record of images associated with a tag. 
Type object Required created dockerImageReference image generation Property Type Description created Time Created holds the time the TagEvent was created dockerImageReference string DockerImageReference is the string that can be used to pull this image generation integer Generation is the spec tag generation that resulted in this tag being updated image string Image is the image 10.2. API endpoints The following API endpoints are available: /apis/image.openshift.io/v1/imagetags GET : list objects of kind ImageTag /apis/image.openshift.io/v1/namespaces/{namespace}/imagetags GET : list objects of kind ImageTag POST : create an ImageTag /apis/image.openshift.io/v1/namespaces/{namespace}/imagetags/{name} DELETE : delete an ImageTag GET : read the specified ImageTag PATCH : partially update the specified ImageTag PUT : replace the specified ImageTag 10.2.1. /apis/image.openshift.io/v1/imagetags Table 10.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. 
This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind ImageTag Table 10.2. HTTP responses HTTP code Reponse body 200 - OK ImageTagList schema 401 - Unauthorized Empty 10.2.2. /apis/image.openshift.io/v1/namespaces/{namespace}/imagetags Table 10.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 10.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description list objects of kind ImageTag Table 10.5. 
Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 10.6. HTTP responses HTTP code Reponse body 200 - OK ImageTagList schema 401 - Unauthorized Empty HTTP method POST Description create an ImageTag Table 10.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.8. Body parameters Parameter Type Description body ImageTag schema Table 10.9. HTTP responses HTTP code Reponse body 200 - OK ImageTag schema 201 - Created ImageTag schema 202 - Accepted ImageTag schema 401 - Unauthorized Empty 10.2.3. /apis/image.openshift.io/v1/namespaces/{namespace}/imagetags/{name} Table 10.10. Global path parameters Parameter Type Description name string name of the ImageTag namespace string object name and auth scope, such as for teams and projects Table 10.11. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an ImageTag Table 10.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 10.13. Body parameters Parameter Type Description body DeleteOptions schema Table 10.14. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ImageTag Table 10.15. HTTP responses HTTP code Reponse body 200 - OK ImageTag schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ImageTag Table 10.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 10.17. Body parameters Parameter Type Description body Patch schema Table 10.18. HTTP responses HTTP code Reponse body 200 - OK ImageTag schema 201 - Created ImageTag schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ImageTag Table 10.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.20. Body parameters Parameter Type Description body ImageTag schema Table 10.21. HTTP responses HTTP code Reponse body 200 - OK ImageTag schema 201 - Created ImageTag schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/image_apis/imagetag-image-openshift-io-v1 |
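For quick experimentation, the ImageTag endpoints described above can be exercised directly with curl. The following is a minimal sketch, not taken from the product documentation: the API server URL, the demo namespace, and the myapp:latest tag are placeholder values, and it assumes you are already logged in with oc.
# Obtain a bearer token for the current session.
TOKEN=$(oc whoami -t)
API=https://api.cluster.example.com:6443   # placeholder API server URL
# List ImageTag objects in the "demo" namespace, 50 items per page.
curl -sk -H "Authorization: Bearer ${TOKEN}" "${API}/apis/image.openshift.io/v1/namespaces/demo/imagetags?limit=50"
# Read a single ImageTag, named <imagestream>:<tag>.
curl -sk -H "Authorization: Bearer ${TOKEN}" "${API}/apis/image.openshift.io/v1/namespaces/demo/imagetags/myapp:latest"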
Chapter 4. Verifying OpenShift Data Foundation deployment | Chapter 4. Verifying OpenShift Data Foundation deployment Use this section to verify that OpenShift Data Foundation is deployed correctly. 4.1. Verifying the state of the pods Procedure Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table: Set filter for Running and Completed pods to verify that the following pods are in Running and Completed state: Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) RGW rook-ceph-rgw-ocs-storagecluster-cephobjectstore-* (1 pod on any storage node) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) 4.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 4.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means if NooBaa DB PVC gets corrupted and we are unable to recover it, can result in total data loss of applicative data residing on the Multicloud Object Gateway. 
Because of this, Red Hat recommends taking a regular backup of the NooBaa DB PVC. If the NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 4.4. Verifying that the specific storage classes exist Procedure Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created when the OpenShift Data Foundation cluster is created: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io ocs-storagecluster-ceph-rgw | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_on_vmware_vsphere/verifying_openshift_data_foundation_deployment |
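The same verification can also be scripted from the command line. This is a minimal sketch using standard oc commands; the exact pod list depends on your node count and device sets:
# Confirm that all pods in the openshift-storage project are Running or Completed.
oc get pods -n openshift-storage
# Confirm that the expected storage classes exist.
oc get storageclass | grep -E 'ocs-storagecluster|openshift-storage.noobaa.io'
# Check the overall state reported by the storage cluster resource.
oc get storagecluster -n openshift-storage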
Preface | Preface | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_the_streams_for_apache_kafka_bridge/preface |
Chapter 2. Distributed tracing architecture | Chapter 2. Distributed tracing architecture 2.1. Distributed tracing architecture Every time a user takes an action in an application, a request is executed by the architecture that may require dozens of different services to participate to produce a response. Red Hat OpenShift distributed tracing lets you perform distributed tracing, which records the path of a request through various microservices that make up an application. Distributed tracing is a technique that is used to tie the information about different units of work together - usually executed in different processes or hosts - to understand a whole chain of events in a distributed transaction. Developers can visualize call flows in large microservice architectures with distributed tracing. It is valuable for understanding serialization, parallelism, and sources of latency. Red Hat OpenShift distributed tracing records the execution of individual requests across the whole stack of microservices, and presents them as traces. A trace is a data/execution path through the system. An end-to-end trace is comprised of one or more spans. A span represents a logical unit of work in Red Hat OpenShift distributed tracing that has an operation name, the start time of the operation, and the duration, as well as potentially tags and logs. Spans may be nested and ordered to model causal relationships. 2.1.1. Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use distributed tracing for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. With distributed tracing you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis Red Hat OpenShift distributed tracing consists of two main components: Red Hat OpenShift distributed tracing platform - This component is based on the open source Jaeger project . Red Hat OpenShift distributed tracing data collection - This component is based on the open source OpenTelemetry project . Both of these components are based on the vendor-neutral OpenTracing APIs and instrumentation. 2.1.2. Red Hat OpenShift distributed tracing features Red Hat OpenShift distributed tracing provides the following capabilities: Integration with Kiali - When properly configured, you can view distributed tracing data from the Kiali console. High scalability - The distributed tracing back end is designed to have no single points of failure and to scale with the business needs. Distributed Context Propagation - Enables you to connect data from different components together to create a complete end-to-end trace. Backwards compatibility with Zipkin - Red Hat OpenShift distributed tracing has APIs that enable it to be used as a drop-in replacement for Zipkin, but Red Hat is not supporting Zipkin compatibility in this release. 2.1.3. Red Hat OpenShift distributed tracing architecture Red Hat OpenShift distributed tracing is made up of several components that work together to collect, store, and display tracing data. Red Hat OpenShift distributed tracing platform - This component is based on the open source Jaeger project . Client (Jaeger client, Tracer, Reporter, instrumented application, client libraries)- The distributed tracing platform clients are language-specific implementations of the OpenTracing API. 
They can be used to instrument applications for distributed tracing either manually or with a variety of existing open source frameworks, such as Camel (Fuse), Spring Boot (RHOAR), MicroProfile (RHOAR/Thorntail), Wildfly (EAP), and many more, that are already integrated with OpenTracing. Agent (Jaeger agent, Server Queue, Processor Workers) - The distributed tracing platform agent is a network daemon that listens for spans sent over User Datagram Protocol (UDP), which it batches and sends to the Collector. The agent is meant to be placed on the same host as the instrumented application. This is typically accomplished by having a sidecar in container environments such as Kubernetes. Jaeger Collector (Collector, Queue, Workers) - Similar to the Jaeger agent, the Jaeger Collector receives spans and places them in an internal queue for processing. This allows the Jaeger Collector to return immediately to the client/agent instead of waiting for the span to make its way to the storage. Storage (Data Store) - Collectors require a persistent storage backend. Red Hat OpenShift distributed tracing platform has a pluggable mechanism for span storage. Note that for this release, the only supported storage is Elasticsearch. Query (Query Service) - Query is a service that retrieves traces from storage. Ingester (Ingester Service) - Red Hat OpenShift distributed tracing can use Apache Kafka as a buffer between the Collector and the actual Elasticsearch backing storage. Ingester is a service that reads data from Kafka and writes to the Elasticsearch storage backend. Jaeger Console - With the Red Hat OpenShift distributed tracing platform user interface, you can visualize your distributed tracing data. On the Search page, you can find traces and explore details of the spans that make up an individual trace. Red Hat OpenShift distributed tracing data collection - This component is based on the open source OpenTelemetry project . OpenTelemetry Collector - The OpenTelemetry Collector is a vendor-agnostic way to receive, process, and export telemetry data. The OpenTelemetry Collector supports open-source observability data formats, for example, Jaeger and Prometheus, sending to one or more open-source or commercial back-ends. The Collector is the default location instrumentation libraries export their telemetry data. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/distributed_tracing/distributed-tracing-architecture |
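As an illustration of how these components are typically brought up on OpenShift, the following is a minimal sketch that creates an all-in-one Jaeger instance through the distributed tracing platform (Jaeger) Operator. It assumes the Operator is already installed; the project name tracing-demo is a placeholder:
oc new-project tracing-demo
cat <<'EOF' | oc apply -f -
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger-all-in-one
spec:
  strategy: allInOne
EOF
# The allInOne strategy runs the collector, query service, and UI in a single pod, which is suitable for evaluation only.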
32.8.2. Registering and Then Mounting an NFS Share | 32.8.2. Registering and Then Mounting an NFS Share Register the system with a Red Hat Subscription Management server (in this example, a local Subscription Asset Manager server): Run a script named runme from an NFS share: NFS file locking is not supported in kickstart mode; therefore, -o nolock is required when mounting an NFS share. | [
"%post --log=/root/ks-post.log /usr/sbin/subscription-manager register [email protected] --password=secret --serverurl=sam-server.example.com --org=\"Admin Group\" --environment=\"Dev\" %end",
"mkdir /mnt/temp mount -o nolock 10.10.0.2:/usr/new-machines /mnt/temp openvt -s -w -- /mnt/temp/runme umount /mnt/temp"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sect-kickstart-example-register-nfs |
Chapter 2. Installing and Running the CLI | Chapter 2. Installing and Running the CLI 2.1. Installing the CLI You can install the CLI on Linux, Windows, or macOS operating systems using the downloadable .zip file. Prerequisites Red Hat Container Registry Authentication for registry.redhat.io . Red Hat distributes container images from registry.redhat.io , which requires authentication. For more details, see Red Hat Container Registry Authentication . 2.1.1. Installing the CLI .zip file Procedure Navigate to the MTA Download page and download the OS-specific CLI file or the src file: mta-7.1.1-cli-linux.zip mta-7.1.1-cli-macos.zip mta-7.1.1-cli-windows.zip mta-7.1.1-cli-src.zip Extract the .zip file to a directory of your choice. The .zip file extracts a single binary, called mta-cli . When you encounter <MTA_HOME> in this guide, replace it with the actual path to your MTA installation. 2.1.2. Installing the CLI by using Podman You can install the CLI using podman pull . Prerequisites Red Hat Container Registry Authentication for registry.redhat.io . Red Hat distributes container images from registry.redhat.io , which requires authentication. See Red Hat Container Registry Authentication for additional details. Podman must be installed. Podman Podman is a daemonless, open source, Linux-native tool designed to make it easy to find, run, build, share, and deploy applications using Open Containers Initiative (OCI) Containers and Container Images. Podman provides a command-line interface (CLI) familiar to anyone who has used the Docker Container Engine. For more information on installing and using Podman, see Podman installation instructions . Procedure Use Podman to authenticate to registry.redhat.io by running the following command: USD podman login registry.redhat.io Enter the user name and password: Username: <username> Password: <***********> Copy the binary PATH to enable system-wide use by running the following command: USD podman cp USD(podman create registry.redhat.com/mta-toolkit/mta-mta-cli-rhel9:{ProductVersion}):/usr/local/bin/mta-cli ./ Warning Although installation using Podman is possible, downloading and installing the .zip file is the preferred installation. 2.1.3. Installing the CLI for use with Docker on Windows (Developer Preview) You can install the CLI for use with Docker on Windows. This is the required approach when migrating applications built with .NET framework 4.5 or later on Windows to cross-platform .NET 8.0. Prerequisites A host with Windows 11+ 64-bit version 21H2 or higher. You have download the Docker Desktop for Windows installer. See Install Docker Desktop on Windows for additional details. Procedure Open a PowerShell with Administrator privileges. Ensure Hyper-V is installed and enabled: PS C:\Users\<your_user_name>> Enable-WindowsOptionalFeature -Online ` -FeatureName Microsoft-Hyper-V-All PS C:\Users\<your_user_name>> Enable-WindowsOptionalFeature -Online ` -FeatureName Containers Note You may need to reboot Windows. Install Docker Desktop on Windows. Double-click Docker_Desktop_Installer.exe to run the installer. By default, Docker Desktop is installed at C:\Program Files\Docker\Docker . Deselect the Use WSL 2 instead of Hyper-V option on the Configuration page to ensure that Docker will run Windows containers as the backend instead of Linux containers. In PowerShell, create a folder for MTA: PS C:\Users\<your_user_name>> mkdir C:\Users\<your_user_name>\MTA Replace <your_user_name> with the username for your home directory. 
Extract the mta-7.1.1-cli-windows.zip file to the MTA folder: PS C:\Users\<your_user_name>> cd C:\Users\<your_user_name>\Downloads Replace <your_user_name> with the username for your home directory. PS C:\Users\<your_user_name>> Expand-Archive ` -Path "{ProductShortNameLower}-{ProductVersion}-cli-windows.zip" ` -DestinationPath "C:\Users\<your_user_name>\MTA" Replace <your_user_name> with the username for your home directory. Ensure Docker is running Windows containers: PS C:\Users\<your_user_name>> docker version Client: Version: 27.0.3 API version: 1.46 Go version: go1.21.11 Git commit: 7d4bcd8 Built: Sat Jun 29 00:03:32 2024 OS/Arch: windows/amd64 1 Context: desktop-windows Server: Docker Desktop 4.32.0 (157355) Engine: Version: 27.0.3 API version: 1.46 (minimum version 1.24) Go version: go1.21.11 Git commit: 662f78c Built: Sat Jun 29 00:02:13 2024 OS/Arch: windows/amd64 2 Experimental: false 1 2 Ensure the OS/Arch setting is windows/amd64 . Set the PODMAN_BIN environment variable to use Docker: PS C:\Users\<your_user_name>> USDenv:PODMAN_BIN="C:\Windows\system32\docker.exe" Set the DOTNET_PROVIDER_IMG environment variable to use the upstream dotnet-external-provider : PS C:\Users\<your_user_name>> USDenv:DOTNET_PROVIDER_IMG="quay.io/konveyor/dotnet-external-provider:v0.5.0" Set the RUNNER_IMG environment variable to use the upstream image: PS C:\Users\<your_user_name>> USDenv:RUNNER_IMG="quay.io/konveyor/kantra:v0.5.0" 2.2. Installing MTA on a disconnected environment On a connected device, first download and save the MTA binary. Then download and save the Podman images, the MTA CLI image and the provider image that you need. Download the required MTA CLI binary from the Migration Toolkit for Applications Red Hat Developer page : CLI for Linux x86_64 CLI for Linux aarch64 CLI for MacOS x86_64 CLI for MacOS aarch64 CLI for Windows x86_64 CLI for Windows aarch64 On a connected device, download and save the images. Copy the binary to the disconnected device. In addition, you must save and import the associated container images by using Podman. 2.2.1. Downloading the Podman images Prerequisites Podman installed. For more information, see Podman . Procedure Use Podman to authenticate to registry.redhat.io : USD podman login registry.redhat.io Enter your username and then your password for registry.redhat.io: Username: <registry_service_account_username> Password: <registry_service_account_password> You should see the following output: Login Succeeded! Use Podman to pull the image from the registry: USD podman pull registry.redhat.io/mta/mta-cli-rhel9:7.1.0 Use Podman to pull the provider image that you need from the registry: For Java, run: USD podman pull registry.redhat.io/mta/mta-java-external-provider-rhel9:7.1.0 For .NET, run: USD podman pull registry.redhat.io/mta/mta-dotnet-external-provider-rhel9:7.1.0 Save the images: USD podman save <image> -o <my_image.image> Copy the .image file and the binary onto a USB or directly to the file system of the disconnected device. On the disconnected device, run USD podman load --input <my_image.image> 2.2.2. CLI known issues Limitations with Podman on Microsoft Windows The CLI is built and distributed with support for Microsoft Windows. 
However, when running any container image based on Red Hat Enterprise Linux 9 (RHEL9) or Universal Base Image 9 (UBI9), the following error can be returned when starting the container: Fatal glibc error: CPU does not support x86-64-v2 This error is caused because Red Hat Enterprise Linux 9 or Universal Base Image 9 container images must be run on a CPU architecture that supports x86-64-v2 . For more details, see (Running Red Hat Enterprise Linux 9 (RHEL) or Universal Base Image (UBI) 9 container images fail with "Fatal glibc error: CPU does not support x86-64-v2") . CLI runs the container runtime correctly. However, different container runtime configurations are not supported. Although unsupported, you can run CLI with Docker instead of Podman , which would resolve this issue. To achieve this, you replace the PODMAN_BIN path with the path to Docker. For example, if you experience this issue, instead of issuing: PODMAN_BIN=/usr/local/bin/docker mta-cli analyze You replace PODMAN_BIN with the path to Docker: <Docker Root Dir>=/usr/local/bin/docker mta-cli analyze While this is not supported, it would allow you to explore CLI while you work to upgrade your hardware or move to hardware that supports x86_64-v2 . 2.3. Running the CLI You can run the Migration Toolkit for Applications (MTA) CLI against one or more applications. Before MTA 7.1.0, if you wanted to run the CLI against multiple applications, you ran a series of --analyze commands, each against an application, and each generating a separate report. This option, which is still fully supported, is described in Running the MTA CLI against an application . In MTA 7.1.0 and later, you can run the CLI against multiple applications by using the --bulk option, to generate a single report. This option, which is presented as a Developer Preview, is described in Running the MTA CLI against multiple applications and generating a single report (Developer Preview) . Important Running the CLI against one or more applications is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA. 2.3.1. Running the MTA CLI against an application You can run the Migration Toolkit for Applications (MTA) CLI against an application. Procedure Open a terminal and navigate to the <MTA_HOME>/ directory. Run the mta-cli script, or mta-cli.exe for Windows, and specify the appropriate arguments: USD ./mta-cli analyze --input <path_to_input> \ --output <path_to_output> --source <source_name> --target <target_source> \ --input : The application to be evaluated. --output : The output directory for the generated reports. --source : The source technology for the application migration. For example, weblogic . --target : The target technology for the application migration. For example, eap8 . Access the report. 2.3.1.1. 
MTA command examples Running MTA on an application archive The following command analyzes the example EAR archive named jee-example-app-1.0.0.ear for migrating from JBoss EAP 5 to JBoss EAP 7: USD <MTA_HOME>/mta-cli analyze \ --input <path_to_jee-example-app-1.0.0.ear> \ --output <path_to_report_output> --source eap5 --target eap7 \ Running MTA on source code The following command analyzes the source code of an example application called customer-management for migrating to JBoss EAP 8. USD <MTA_HOME>/mta-cli analyze --mode source-only --input <path_to_customer-management> --output <path_to_report_output> --target eap8 Running cloud-readiness rules The following command analyzes the example EAR archive named jee-example-app-1.0.0.ear for migrating to JBoss EAP 7. It also evaluates the archive for cloud readiness: USD <MTA_HOME>/mta-cli analyze --input <path_to_jee-example-app-1.0.0.ear> \ --output <path_to_report_output> \ --target eap7 2.3.2. Running the MTA CLI against multiple applications and generating a single report (Developer Preview) You can now run the Migration Toolkit for Applications (MTA) CLI against multiple applications and generate a combined report. This can save you time and give you a better idea of how to prepare a set of applications for migration. This feature is currently a Developer Preview feature. Important Running the CLI against one or more applications is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA. Procedure Open a terminal and navigate to the <MTA_HOME>/ directory. Run the mta-cli script, or mta-cli.exe for Windows, and specify the appropriate arguments, entering one input per analyze command, but entering the same output directory for all inputs. For example, to analyze applications A, B, and C: Enter the following command for input A: USD ./{mta-cli} analyze --bulk --input=<path_to_input_A> --output=<path_to_output_ABC> --source <source_A> --target <target_A> --input : The application to be evaluated. --output : The output directory for the generated reports. --source : The source technology for the application migration. For example, weblogic . --target : The target technology for the application migration. For example, eap8 . Enter the following command for input B: USD ./{mta-cli} analyze --bulk --input=<path_to_input_B> --output=<path_to_output_ABC> --source <source_B> --target <target_B> Enter the following command for input C: USD ./{mta-cli} analyze --bulk --input=<path_to_input_C> --output=<path_to_output_ABC> --source <source_C> --target <target_C> MTA generates a single report, listing all issues that need to be resolved before the applications can be migrated. Access the report. 2.3.3. Performing analysis using the command line Analyze supports running source code and binary analysis by using the analyzer-lsp tool. analyzer-lsp evaluates the rules for the providers and determines rule matches. 
To run analysis on application source code, run the following command: mta-cli analyze --input=<path_to_source_code> --output=<path_to_output_directory> All flags: Analyze application source code Usage: mta-cli analyze [flags] Flags: --analyze-known-libraries Analyze known open-source libraries. --context-lines (int) Number of lines of source code to include in the output for each incident (default: `100`). -d, --dependency-folders (stringArray) Directory for dependencies. --enable-default-rulesets Run default rulesets with analysis (default: `true`). -h, --help Help for analyze. --http-proxy (string) HTTP proxy string URL. --https-proxy (string) HTTPS proxy string URL. --incident-selector (string) An expression to select incidents based on custom variables. Example: !package=io.demo.config-utils -i, --input (string) Path to application source code or a binary. --jaeger-endpoint (string) Jaeger endpoint to collect traces. --json-output Create analysis and dependency output as JSON. --list-sources List rules for available migration sources. --list-targets List rules for available migration targets. -l, --label-selector (string) Run rules based on specified label selector expression. --maven-settings (string) Path to the custom maven settings file to use. --overwrite Overwrite output directory. --skip-static-report Do not generate the static report. -m, --mode (string) Analysis mode, must be one of `full` or `source-only` (default: `full`). --no-proxy (string) Proxy-excluded URLs (relevant only with proxy). -o, --output (string) Path to the directory for analysis output. --overwrite Overwrite output directory. --rules (stringArray) Filename or directory containing rule files. --skip-static-report Do not generate the static report. -s, --source (string) Source technology to consider for analysis. To specify multiple sources, repeat the parameter: `--source <source_1> --source <source_2>` etc. -t, --target (string) Target technology to consider for analysis. To specify multiple targets, repeat the parameter: `--target <target_1> --target <target_2>` etc. Global Flags: --log-level uint32 Log level (default: 4). --no-cleanup Do not cleanup temporary resources. Note The list of flags above does not include the --bulk flag because this flag is only offered as part of a Developer Preview feature. That feature is described in Support for providing a single report when analyzing multiple applications on the CLI . Usage example Get an example application to run analysis on. List available target technologies. mta-cli analyze --list-targets Run an analysis with a specified target technology, for example cloud-readiness . mta-cli analyze --input=<path-to/example-applications/example-1> --output=<path-to-output-dir> --target=cloud-readiness Several analysis reports are created in your specified output path: USD ls ./output/ -1 analysis.log dependencies.yaml dependency.log output.yaml static-report output.yaml is the file that contains the issues report. static-report contains the static HTML report. dependencies.yaml contains the dependencies report. 2.3.4. Performing transformation by using the command line You can use transformation to perform the following actions: Transform Java applications source code by using the transform openrewrite command. Convert XML rules to YAML rules by using the transform rules command. Important Performing transformation requires the container runtime to be configured. For more information, see Installing the CLI by using Podman . 
Transform application source code or mta XML rules Usage: mta-cli transform [flags] mta-cli transform [command] Available Commands: openrewrite Transform application source code using OpenRewrite recipes rules Convert XML rules to YAML Flags: -h, --help help for transform Global Flags: --log-level uint32 log level (default 4) --no-cleanup do not clean up temporary resources Use "mta-cli transform [command] --help" for more information about a command. 2.3.4.1. OpenRewrite The openrewrite subcommand allows running OpenRewrite recipes on source code. Transform application source code using OpenRewrite recipes Usage: mta-cli transform openrewrite [flags] Flags: -g, --goal string target goal (default "dryRun") -h, --help help for openrewrite -i, --input string path to application source code directory -l, --list-targets list all available OpenRewrite recipes -s, --maven-settings string path to a custom maven settings file to use -t, --target string target openrewrite recipe to use. Run --list-targets to get a list of packaged recipes. Global Flags: --log-level uint32 log level (default 4) --no-cleanup do not clean up temporary resources To run transform openrewrite on application source code, run the following command: mta-cli transform openrewrite --input=<path/to/source/code> --target=<exactly_one_target_from_the_list> Note You can only use a single target to run the transform overwrite command. 2.3.4.2. Rules You can use the rules subcommand of the transform command to convert mta XML rules to analyzer-lsp YAML rules. To covert rules, the rules subcommand uses the windup-shim tool. Note analyzer-lsp evaluates the rules for the providers and determines rule matches. Convert XML rules to YAML Usage: mta-cli transform rules [flags] Flags: -h, --help help for rules -i, --input stringArray path to XML rule file(s) or directory -o, --output string path to output directory Global Flags: --log-level int log level (default 5) To run transform rules on application source code, run the following: mta-cli transform rules --input=<path/to/xmlrules> --output=<path/to/output/dir> Usage example Get an example application to transform source code. View the available OpenRewrite recipes. mta-cli transform openrewrite --list-targets Run a recipe on the example application. mta-cli transform openrewrite --input=<path-to/jakartaee-duke> --target=jakarta-imports Inspect the jakartaee-duke application source code diff to see the transformation. 2.3.4.3. Available OpenRewrite recipes Table 2.1. Available OpenRewrite recipes Migration path Purpose rewrite.configLocation activeRecipes Java EE to Jakarta EE Replace import of javax packages with equivalent jakarta packages Replace javax artifacts, declared within pom.xml files, with the jakarta equivalents <MTA_HOME>/rules/openrewrite/jakarta \ /javax/imports/rewrite.yml org.jboss.windup.JavaxToJakarta Java EE to Jakarta EE Rename bootstrapping files <MTA_HOME>/rules/openrewrite/jakarta \ /javax/bootstrapping/rewrite.yml org.jboss.windup.jakarta.javax. \ BootstrappingFiles Java EE to Jakarta EE Transform persistence.xml configuration <MTA_HOME>/rules/openrewrite/jakarta \ /javax/xml/rewrite.yml org.jboss.windup.javax-jakarta. \ PersistenceXML Spring Boot to Quarkus Replace spring.jpa.hibernate.ddl-auto property within files matching application*.properties <MTA_HOME>/rules/openrewrite/quarkus \ /springboot/properties/rewrite.yml org.jboss.windup.sb-quarkus.Properties 2.4. 
Accessing reports When you run the Migration Toolkit for Applications, a report is generated in the <OUTPUT_REPORT_DIRECTORY> that you specify using the --output argument in the command line. The output directory contains the following files and subdirectories: Procedure Obtain the path of the index.html file of your report from the output that appears after you run MTA: Open the index.html file by using a browser. The generated report is displayed. 2.5. Analyzing multi-language applications with CLI Starting from MTA 7.1.0, you can run the application analysis on applications written in multiple languages. You can perform the analysis either of the following ways: Select the supported language provider to run the analysis for. Override the existing supported language provider with your own unsupported language provider and run the analysis for this unsupported provider. 2.5.1. Analyzing a multi-language application for the selected supported language provider When analyzing a multi-language application with Migration Toolkit for Applications (MTA) CLI, you can explicitly set a supported language provider according to your application language to run the analysis for. Prerequisites You are running the latest version of MTA CLI. Procedure List language providers supported for the analysis: USD mta-cli analyze --list-providers Run the application analysis for the selected language provider: USD mta-cli analyze --input <_path_to_the_source_repository_> --output <_path_to_the_output_directory_> --provider <_language_provider_> --rules <_path_to_custom_rules_> Note that if you do not set the --provider option, the analysis might fail because it detects unsupported providers. The analysis will complete without --provider only if all discovered providers are supported. 2.5.2. Analyzing a multi-language application for an unsupported language provider When analyzing a multi-language application with Migration Toolkit for Applications (MTA) CLI, you can run the analysis for an unsupported language provider. To do so, you must override an existing supported language provider with your own unsupported language provider by using the --override-provider-settings option. Important You must create a configuration file for your unsupported language provider before overriding the supported provider. Prerequisites You created a configuration file for your unsupported language provider. Procedure Override an existing supported language provider with your unsupported provider: USD mta-cli analyze --provider-override <path_to_configuration_file> --output=<path_to_the_output_directory> --rules <path_to_custom_rules> | [
"podman login registry.redhat.io",
"Username: <username> Password: <***********>",
"podman cp USD(podman create registry.redhat.com/mta-toolkit/mta-mta-cli-rhel9:{ProductVersion}):/usr/local/bin/mta-cli ./",
"PS C:\\Users\\<your_user_name>> Enable-WindowsOptionalFeature -Online ` -FeatureName Microsoft-Hyper-V-All",
"PS C:\\Users\\<your_user_name>> Enable-WindowsOptionalFeature -Online ` -FeatureName Containers",
"PS C:\\Users\\<your_user_name>> mkdir C:\\Users\\<your_user_name>\\MTA",
"PS C:\\Users\\<your_user_name>> cd C:\\Users\\<your_user_name>\\Downloads",
"PS C:\\Users\\<your_user_name>> Expand-Archive ` -Path \"{ProductShortNameLower}-{ProductVersion}-cli-windows.zip\" ` -DestinationPath \"C:\\Users\\<your_user_name>\\MTA\"",
"PS C:\\Users\\<your_user_name>> docker version",
"Client: Version: 27.0.3 API version: 1.46 Go version: go1.21.11 Git commit: 7d4bcd8 Built: Sat Jun 29 00:03:32 2024 OS/Arch: windows/amd64 1 Context: desktop-windows Server: Docker Desktop 4.32.0 (157355) Engine: Version: 27.0.3 API version: 1.46 (minimum version 1.24) Go version: go1.21.11 Git commit: 662f78c Built: Sat Jun 29 00:02:13 2024 OS/Arch: windows/amd64 2 Experimental: false",
"PS C:\\Users\\<your_user_name>> USDenv:PODMAN_BIN=\"C:\\Windows\\system32\\docker.exe\"",
"PS C:\\Users\\<your_user_name>> USDenv:DOTNET_PROVIDER_IMG=\"quay.io/konveyor/dotnet-external-provider:v0.5.0\"",
"PS C:\\Users\\<your_user_name>> USDenv:RUNNER_IMG=\"quay.io/konveyor/kantra:v0.5.0\"",
"podman login registry.redhat.io",
"Username: <registry_service_account_username> Password: <registry_service_account_password>",
"Login Succeeded!",
"podman pull registry.redhat.io/mta/mta-cli-rhel9:7.1.0",
"podman pull registry.redhat.io/mta/mta-java-external-provider-rhel9:7.1.0",
"podman pull registry.redhat.io/mta/mta-dotnet-external-provider-rhel9:7.1.0",
"podman save <image> -o <my_image.image>",
"podman load --input <my_image.image>",
"Fatal glibc error: CPU does not support x86-64-v2",
"PODMAN_BIN=/usr/local/bin/docker mta-cli analyze",
"<Docker Root Dir>=/usr/local/bin/docker mta-cli analyze",
"./mta-cli analyze --input <path_to_input> --output <path_to_output> --source <source_name> --target <target_source> \\",
"<MTA_HOME>/mta-cli analyze --input <path_to_jee-example-app-1.0.0.ear> --output <path_to_report_output> --source eap5 --target eap7 \\",
"<MTA_HOME>/mta-cli analyze --mode source-only --input <path_to_customer-management> --output <path_to_report_output> --target eap8",
"<MTA_HOME>/mta-cli analyze --input <path_to_jee-example-app-1.0.0.ear> --output <path_to_report_output> --target eap7",
"./{mta-cli} analyze --bulk --input=<path_to_input_A> --output=<path_to_output_ABC> --source <source_A> --target <target_A>",
"./{mta-cli} analyze --bulk --input=<path_to_input_B> --output=<path_to_output_ABC> --source <source_B> --target <target_B>",
"./{mta-cli} analyze --bulk --input=<path_to_input_C> --output=<path_to_output_ABC> --source <source_C> --target <target_C>",
"mta-cli analyze --input=<path_to_source_code> --output=<path_to_output_directory>",
"Analyze application source code Usage: mta-cli analyze [flags] Flags: --analyze-known-libraries Analyze known open-source libraries. --context-lines (int) Number of lines of source code to include in the output for each incident (default: `100`). -d, --dependency-folders (stringArray) Directory for dependencies. --enable-default-rulesets Run default rulesets with analysis (default: `true`). -h, --help Help for analyze. --http-proxy (string) HTTP proxy string URL. --https-proxy (string) HTTPS proxy string URL. --incident-selector (string) An expression to select incidents based on custom variables. Example: !package=io.demo.config-utils -i, --input (string) Path to application source code or a binary. --jaeger-endpoint (string) Jaeger endpoint to collect traces. --json-output Create analysis and dependency output as JSON. --list-sources List rules for available migration sources. --list-targets List rules for available migration targets. -l, --label-selector (string) Run rules based on specified label selector expression. --maven-settings (string) Path to the custom maven settings file to use. --overwrite Overwrite output directory. --skip-static-report Do not generate the static report. -m, --mode (string) Analysis mode, must be one of `full` or `source-only` (default: `full`). --no-proxy (string) Proxy-excluded URLs (relevant only with proxy). -o, --output (string) Path to the directory for analysis output. --overwrite Overwrite output directory. --rules (stringArray) Filename or directory containing rule files. --skip-static-report Do not generate the static report. -s, --source (string) Source technology to consider for analysis. To specify multiple sources, repeat the parameter: `--source <source_1> --source <source_2>` etc. -t, --target (string) Target technology to consider for analysis. To specify multiple targets, repeat the parameter: `--target <target_1> --target <target_2>` etc. Global Flags: --log-level uint32 Log level (default: 4). --no-cleanup Do not cleanup temporary resources.",
"mta-cli analyze --list-targets",
"mta-cli analyze --input=<path-to/example-applications/example-1> --output=<path-to-output-dir> --target=cloud-readiness",
"ls ./output/ -1 analysis.log dependencies.yaml dependency.log output.yaml static-report",
"Transform application source code or mta XML rules Usage: mta-cli transform [flags] mta-cli transform [command] Available Commands: openrewrite Transform application source code using OpenRewrite recipes rules Convert XML rules to YAML Flags: -h, --help help for transform Global Flags: --log-level uint32 log level (default 4) --no-cleanup do not clean up temporary resources Use \"mta-cli transform [command] --help\" for more information about a command.",
"Transform application source code using OpenRewrite recipes Usage: mta-cli transform openrewrite [flags] Flags: -g, --goal string target goal (default \"dryRun\") -h, --help help for openrewrite -i, --input string path to application source code directory -l, --list-targets list all available OpenRewrite recipes -s, --maven-settings string path to a custom maven settings file to use -t, --target string target openrewrite recipe to use. Run --list-targets to get a list of packaged recipes. Global Flags: --log-level uint32 log level (default 4) --no-cleanup do not clean up temporary resources",
"mta-cli transform openrewrite --input=<path/to/source/code> --target=<exactly_one_target_from_the_list>",
"Convert XML rules to YAML Usage: mta-cli transform rules [flags] Flags: -h, --help help for rules -i, --input stringArray path to XML rule file(s) or directory -o, --output string path to output directory Global Flags: --log-level int log level (default 5)",
"mta-cli transform rules --input=<path/to/xmlrules> --output=<path/to/output/dir>",
"mta-cli transform openrewrite --list-targets",
"mta-cli transform openrewrite --input=<path-to/jakartaee-duke> --target=jakarta-imports",
"<OUTPUT_REPORT_DIRECTORY>/ βββ index.html // Landing page for the report βββ <EXPORT_FILE>.csv // Optional export of data in CSV format βββ archives/ // Archives extracted from the application βββ mavenized/ // Optional Maven project structure βββ reports/ // Generated HTML reports βββ stats/ // Performance statistics",
"Report created: <OUTPUT_REPORT_DIRECTORY>/index.html Access it at this URL: file:///<OUTPUT_REPORT_DIRECTORY>/index.html",
"mta-cli analyze --list-providers",
"mta-cli analyze --input <_path_to_the_source_repository_> --output <_path_to_the_output_directory_> --provider <_language_provider_> --rules <_path_to_custom_rules_>",
"mta-cli analyze --provider-override <path_to_configuration_file> --output=<path_to_the_output_directory> --rules <path_to_custom_rules>"
] | https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.1/html/cli_guide/installing_and_running_the_cli |
Troubleshooting Guide | Troubleshooting Guide Red Hat Ceph Storage 8 Troubleshooting Red Hat Ceph Storage Red Hat Ceph Storage Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/troubleshooting_guide/index |
probe::nfs.fop.check_flags | probe::nfs.fop.check_flags Name probe::nfs.fop.check_flags - NFS client checking flag operation Synopsis nfs.fop.check_flags Values flag file flag | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-nfs-fop-check-flags |
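A minimal sketch of exercising this probe from the command line, assuming the systemtap package and the kernel debuginfo matching the running kernel are installed:
# Print the flag value each time the NFS client check_flags operation fires; press Ctrl+C to stop.
stap -v -e 'probe nfs.fop.check_flags { printf("nfs check_flags: flag=%d\n", flag) }'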
Chapter 2. Checking services using IdM Healthcheck | Chapter 2. Checking services using IdM Healthcheck You can monitor services used by the Identity Management (IdM) server using the Healthcheck tool. For details, see Healthcheck in IdM . 2.1. Services Healthcheck test The Healthcheck tool includes a test to check whether any IdM service is not running. This test is important because services that are not running can cause failures in other tests. Therefore, check that all services are running first. You can then check all other test results. To see all services tests, run ipa-healthcheck with the --list-sources option: You can find all services tested with Healthcheck under the ipahealthcheck.meta.services source: certmonger dirsrv gssproxy httpd ipa_custodia ipa_dnskeysyncd ipa_otpd kadmin krb5kdc named pki_tomcatd sssd Note Run these tests on all IdM servers when trying to discover issues. 2.2. Screening services using Healthcheck Follow this procedure to run a standalone manual test of services running on the Identity Management (IdM) server using the Healthcheck tool. The Healthcheck tool includes many tests, whose results can be narrowed down with: Excluding all successful tests: --failures-only Including only services tests: --source=ipahealthcheck.meta.services Procedure To run Healthcheck with warnings, errors, and critical issues regarding services, enter: A successful test displays empty brackets: If one of the services fails, the result can look similar to this example: Additional resources See man ipa-healthcheck . | [
"ipa-healthcheck --list-sources",
"ipa-healthcheck --source=ipahealthcheck.meta.services --failures-only",
"[ ]",
"{ \"source\": \"ipahealthcheck.meta.services\", \"check\": \"httpd\", \"result\": \"ERROR\", \"kw\": { \"status\": false, \"msg\": \"httpd: not running\" } }"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_idm_healthcheck_to_monitor_your_idm_environment/checking-services-using-idm-healthcheck_using-idm-healthcheck-to-monitor-your-idm-environment |
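Because the note above recommends running these tests on all IdM servers, a small wrapper loop can help collect the results in one place. This is a sketch with hypothetical host names:
# Run the services check on each IdM server and print any failures per host.
for host in server1.idm.example.com server2.idm.example.com; do
    echo "== ${host} =="
    ssh "${host}" 'ipa-healthcheck --source=ipahealthcheck.meta.services --failures-only'
done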
2.4.2. Monitoring Bandwidth | 2.4.2. Monitoring Bandwidth Monitoring bandwidth is more difficult than monitoring the other resources described here. This is because performance statistics tend to be device-based, while most of the places where bandwidth is important tend to be the buses that connect devices. In those instances where more than one device shares a common bus, you might see reasonable statistics for each device, but the aggregate load those devices place on the bus would be much greater. Another challenge is that statistics for the devices themselves are sometimes not available. This is particularly true for system expansion buses and datapaths [5] . However, even though 100% accurate bandwidth-related statistics may not always be available, there is often enough information to make some level of analysis possible, particularly when related statistics are taken into account. Some of the more common bandwidth-related statistics are: Bytes received/sent Network interface statistics provide an indication of the bandwidth utilization of one of the more visible buses -- the network. Interface counts and rates These network-related statistics can give indications of excessive collisions, transmit and receive errors, and more. Using these statistics (particularly if they are available for more than one system on your network), it is possible to perform a modicum of network troubleshooting even before the more common network diagnostic tools are used. Transfers per Second Normally collected for block I/O devices, such as disk and high-performance tape drives, this statistic is a good way of determining whether a particular device's bandwidth limit is being reached. Due to their electromechanical nature, disk and tape drives can only perform so many I/O operations every second; their performance degrades rapidly as this limit is reached. [5] More information on buses, datapaths, and bandwidth is available in Chapter 3, Bandwidth and Processing Power . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-resource-what-to-monitor-bandwidth |
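For example, several of the statistics described above can be sampled with the sysstat tools. A short sketch; interface and device names will differ on your system:
# Bytes and packets received/sent per network interface, five samples at one-second intervals.
sar -n DEV 1 5
# Interface error and collision counters.
sar -n EDEV 1 5
# Transfers per second (tps) and throughput for each block device.
iostat -d 2 3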
Appendix C. Revision History | Appendix C. Revision History Revision History Revision 0.0-42 Fri Apr 28 2023 Lucie Varakova Added a known issue (Authentication and Interoperability). Revision 0.0-41 Tue Mar 02 2021 Lenka Spackova Updated a link to Upgrading from RHEL 6 to RHEL 7 . Fixed CentOS Linux name. Revision 0.0-40 Tue Apr 28 2020 Lenka Spackova Updated information about in-place upgrades. Revision 0.0-39 Wed Feb 12 2020 Jaroslav Klech Provided a complete kernel version to Architectures and New Features chapters. Revision 0.0-38 Mon Oct 07 2019 Jiri Herrmann Clarified a Technology Preview note related to OVMF. Revision 0.0-37 Thu Sep 19 2019 Lenka Spackova Fixed a broken link in Overview. Revision 0.0-36 Wed Aug 21 2019 Lenka Spackova Added instructions on how to enable the Extras channel to the YUM 4 Technology Preview note (System and Subscription Management). Revision 0.0-35 Thu Aug 15 2019 Lenka Spackova Added a Technology Preview related to Azure M416v2 as a host (Virtualization). Revision 0.0-34 Tue Aug 06 2019 Lenka Spackova Updated deprecated packages. Revision 0.0-33 Thu Jul 15 2019 Jiri Herrmann Removed an unsupported virtualization feature. Revision 0.0-32 Thu Jul 11 2019 Lenka Spackova Updated Architectures. Revision 0.0-31 Thu Jun 13 2019 Lenka Spackova Updated information regarding DIF/DIX support (Storage). Updated Deprecated Functionality with information about target mode in Software FCoE and Fibre Channel. Revision 0.0-30 Tue Jun 11 2019 Lenka Spackova Added a full support note related to Memory Mode for Optane DC Persistent Memory (Hardware Enablement). Revision 0.0-29 Mon Jun 03 2019 Lenka Spackova Updated Deprecated Functionality. Revision 0.0-28 Thu May 30 2019 Lenka Spackova Updated Overview with additional resources. Extended in-place upgrade information with upgrades from RHEL 7 to RHEL 8. Revision 0.0-27 Wed May 29 2019 Lenka Spackova Added a known issue related to ksh (Compiler and Tools). Updated Deprecated Functionality. Revision 0.0-26 Mon May 13 2019 Lenka Spackova Added a known issue related to freeradius upgrade (Networking). Revision 0.0-25 Sun Apr 28 2019 Lenka Spackova Improved wording of a Technology Preview feature description (File Systems). Revision 0.0-24 Thu Apr 04 2019 Lenka Spackova Fixed an XFS-related command in a feature description (File Systems). Revision 0.0-23 Wed Mar 13 2019 Lenka Spackova Added a ReaR known issue related to RHBA-2019:0498 (Servers and Services). Revision 0.0-22 Tue Feb 19 2019 Vladimir Slavik Added further release notes for installer and compilers and tools. Revision 0.0-21 Mon Feb 04 2019 Lenka Spackova Improved structure of the book. Revision 0.0-20 Tue Jan 21 2019 Filip Hanzelka Updated Authentication and Interoperability in the Known Issues section. Revision 0.0-19 Tue Jan 08 2019 Lenka Spackova Updated NVMe/FC limitations in RHEL. Updated Deprecated Functionality. Revision 0.0-18 Fri Dec 07 2018 Lenka Spackova Corrections to existing descriptions (BZ#1578688, BZ#1649408, BZ#1584753). Revision 0.0-17 Thu Nov 29 2018 Lenka Spackova Added Podman to Overview. Added a new feature (Networking). Added a known issue (Networking). Updated a feature related to booting from an iSCSI device. Revision 0.0-16 Wed Nov 21 2018 Lenka Spackova Added a known issue (Kernel). Revision 0.0-15 Fri Nov 16 2018 Lenka Spackova Added a known issue (Servers and Services). Revision 0.0-14 Thu Nov 15 2018 Lenka Spackova Minor updates to Deprecated Functionality and Known Issues. 
Revision 0.0-13 Tue Nov 13 2018 Lenka Spackova Added a known issue (Servers and Services). Revision 0.0-12 Mon Nov 12 2018 Lenka Spackova Added a known issue (Security). Fixed an external link. Revision 0.0-11 Fri Nov 09 2018 Lenka Spackova Additions to Deprecated Functionality, Compiler and Tools, Desktop, and other minor updates. Revision 0.0-10 Tue Nov 06 2018 Lenka Spackova Fixed wording regarding NVMe/FC limitations in RHEL. Revision 0.0-9 Mon Nov 05 2018 Lenka Spackova Updated Deprecated Functionality. Moved eBPF to the Kernel chapter. Revision 0.0-8 Fri Nov 02 2018 Lenka Spackova Updated NVMe/FC-related notes. Updated Deprecated Functionality. Other various additions and updates. Revision 0.0-7 Tue Oct 30 2018 Lenka Spackova Release of the Red Hat Enterprise Linux 7.6 Release Notes. Revision 0.0-0 Wed Aug 22 2018 Lenka Spackova Release of the Red Hat Enterprise Linux 7.6 Beta Release Notes. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/appe-7.6_release_notes-revision_history |
Chapter 1. Getting Started with Data Grid CLI | Chapter 1. Getting Started with Data Grid CLI The command line interface (CLI) lets you remotely connect to Data Grid Server to access data and perform administrative functions. Complete the following procedures to learn basic CLI usage such as creating users, connecting to Data Grid, and navigating resources. 1.1. Creating Data Grid users Add credentials to authenticate with Data Grid Server deployments through Hot Rod and REST endpoints. Before you can access the Data Grid Console or perform cache operations, you must create at least one user with the Data Grid command line interface (CLI). Tip Data Grid enforces security authorization with role-based access control (RBAC). Create an admin user the first time you add credentials to gain full ADMIN permissions to your Data Grid deployment. Prerequisites Download and install Data Grid Server. Procedure Open a terminal in USDRHDG_HOME . Create an admin user with the user create command. bin/cli.sh user create admin -p changeme Tip Run help user from a CLI session to get complete command details. Verification Open users.properties and confirm the user exists. Note Adding credentials to a properties realm with the CLI creates the user only on the server instance to which you are connected. You must manually synchronize credentials in a properties realm to each node in the cluster. 1.1.1. Granting roles to users Assign roles to users and grant them permissions to perform cache operations and interact with Data Grid resources. Tip Grant roles to groups instead of users if you want to assign the same role to multiple users and centrally maintain their permissions. Prerequisites Have ADMIN permissions for Data Grid. Create Data Grid users. Procedure Create a CLI connection to Data Grid. Assign roles to users with the user roles grant command, for example: Verification List roles that you grant to users with the user roles ls command. 1.1.2. Adding users to groups Groups let you change permissions for multiple users. You assign a role to a group and then add users to that group. Users inherit permissions from the group role. Note You use groups as part of a properties realm in the Data Grid Server configuration. Each group is a special type of user that also requires a username and password. Prerequisites Have ADMIN permissions for Data Grid. Create Data Grid users. Procedure Create a CLI connection to Data Grid. Use the user create command to create a group. Specify a group name with the --groups argument. Set a username and password for the group. List groups. Grant a role to the group. List roles for the group. Add users to the group one at a time. Verification Open groups.properties and confirm the group exists. 1.1.3. Data Grid user roles and permissions Data Grid includes several roles that provide users with permissions to access caches and Data Grid resources. Role Permissions Description admin ALL Superuser with all permissions including control of the Cache Manager lifecycle. deployer ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR, CREATE Can create and delete Data Grid resources in addition to application permissions. application ALL_READ, ALL_WRITE, LISTEN, EXEC, MONITOR Has read and write access to Data Grid resources in addition to observer permissions. Can also listen to events and execute server tasks and scripts. observer ALL_READ, MONITOR Has read access to Data Grid resources in addition to monitor permissions. monitor MONITOR Can view statistics via JMX and the metrics endpoint.
Additional resources org.infinispan.security.AuthorizationPermission Enum Data Grid configuration schema reference 1.2. Connecting to Data Grid Servers Establish CLI connections to Data Grid. Prerequisites Add user credentials and have at least one running Data Grid server instance. Procedure Open a terminal in USDRHDG_HOME . Start the CLI. Linux: Microsoft Windows: Run the connect command and enter your username and password when prompted. Data Grid Server on the default port of 11222 : Data Grid Server with a port offset of 100 : 1.3. Navigating CLI Resources The Data Grid CLI exposes a navigable tree that allows you to list, describe, and manipulate Data Grid cluster resources. Tip Press the tab key to display available commands and options. Use the -h option to display help text. When you connect to a Data Grid cluster, it opens in the context of the default cache container. Use ls to list resources. Use cd to navigate the resource tree. Use describe to view information about resources. { "name" : "default", "version" : "xx.x.x-FINAL", "cluster_name" : "cluster", "coordinator" : true, "cache_configuration_names" : [ "org.infinispan.REPL_ASYNC", "___protobuf_metadata", "org.infinispan.DIST_SYNC", "org.infinispan.LOCAL", "org.infinispan.INVALIDATION_SYNC", "org.infinispan.REPL_SYNC", "org.infinispan.SCATTERED_SYNC", "org.infinispan.INVALIDATION_ASYNC", "org.infinispan.DIST_ASYNC" ], "physical_addresses" : "[192.0.2.0:7800]", "coordinator_address" : "<hostname>", "cache_manager_status" : "RUNNING", "created_cache_count" : "1", "running_cache_count" : "1", "node_address" : "<hostname>", "cluster_members" : [ "<hostname1>", "<hostname2>" ], "cluster_members_physical_addresses" : [ "192.0.2.0:7800", "192.0.2.0:7801" ], "cluster_size" : 2, "defined_caches" : [ { "name" : "mycache", "started" : true }, { "name" : "___protobuf_metadata", "started" : true } ] } 1.3.1. CLI Resources The Data Grid CLI exposes different resources to: create, modify, and manage local or clustered caches. perform administrative operations for Data Grid clusters. Cache Resources caches Data Grid cache instances. The default cache container is empty. Use the CLI to create caches from templates or infinispan.xml files. counters Strong or Weak counters that record the count of objects. configurations Data Grid configurations. schemas Protocol Buffers (Protobuf) schemas that structure data in the cache. tasks Remote tasks creating and managing Data Grid cache definitions. Cluster Resources containers Cache containers on the Data Grid cluster. cluster Lists Data Grid Servers joined to the cluster. server Resources for managing and monitoring Data Grid Servers. 1.4. Shutting down Data Grid Server Stop individually running servers or bring down clusters gracefully. Procedure Create a CLI connection to Data Grid. Shut down Data Grid Server in one of the following ways: Stop all nodes in a cluster with the shutdown cluster command, for example: This command saves cluster state to the data folder for each node in the cluster. If you use a cache store, the shutdown cluster command also persists all data in the cache. Stop individual server instances with the shutdown server command and the server hostname, for example: Important The shutdown server command does not wait for rebalancing operations to complete, which can lead to data loss if you specify multiple hostnames at the same time. Tip Run help shutdown for more details about using the command. 
Verification Data Grid logs the following messages when you shut down servers: 1.4.1. Shutdown and restart of Data Grid clusters Prevent data loss and ensure consistency of your cluster by properly shutting down and restarting nodes. Cluster shutdown Data Grid recommends using the shutdown cluster command to stop all nodes in a cluster while saving cluster state and persisting all data in the cache. You can also use the shutdown cluster command for clusters with a single node. When you bring Data Grid clusters back online, all nodes and caches in the cluster will be unavailable until all nodes rejoin. To prevent inconsistencies or data loss, Data Grid restricts access to the data stored in the cluster and modifications of the cluster state until the cluster is fully operational again. Additionally, Data Grid disables cluster rebalancing and prevents local cache stores from purging on startup. During the cluster recovery process, the coordinator node logs messages for each new node joining, indicating which nodes are available and which are still missing. Other nodes in the Data Grid cluster have the view from the time they join. You can monitor availability of caches using the Data Grid Console or REST API. However, in cases where waiting for all nodes is neither necessary nor desired, you can make a cache available with the current topology. You can do this through the CLI (see below) or the REST API. Important Manually installing a topology can lead to data loss; only perform this operation if the initial topology cannot be recreated. Server shutdown After using the shutdown server command to bring nodes down, the first node to come back online will be available immediately without waiting for other members. The remaining nodes join the cluster immediately, triggering state transfer but loading the local persistence first, which might lead to stale entries. Local cache stores configured to purge on startup will be emptied when the server starts. Local cache stores marked as purge=false will be available after a server restarts but might contain stale entries. If you shut down clustered nodes with the shutdown server command, you must restart each server in reverse order to avoid potential issues related to data loss and stale entries in the cache. For example, if you shut down server1 and then shut down server2 , you should first start server2 and then start server1 . However, restarting clustered nodes in reverse order does not completely prevent data loss and stale entries. | [
"bin/cli.sh user create admin -p changeme",
"cat server/conf/users.properties admin=scram-sha-1\\:BYGcIAwvf6b",
"user roles grant --roles=deployer katie",
"user roles ls katie [\"deployer\"]",
"user create --groups=developers developers -p changeme",
"user ls --groups",
"user roles grant --roles=application developers",
"user roles ls developers",
"user groups john --groups=developers",
"cat server/conf/groups.properties",
"bin/cli.sh",
"bin\\cli.bat",
"[disconnected]> connect",
"[disconnected]> connect 127.0.0.1:11322",
"[//containers/default]>",
"[//containers/default]> ls caches counters configurations schemas tasks",
"cd caches",
"describe",
"{ \"name\" : \"default\", \"version\" : \"xx.x.x-FINAL\", \"cluster_name\" : \"cluster\", \"coordinator\" : true, \"cache_configuration_names\" : [ \"org.infinispan.REPL_ASYNC\", \"___protobuf_metadata\", \"org.infinispan.DIST_SYNC\", \"org.infinispan.LOCAL\", \"org.infinispan.INVALIDATION_SYNC\", \"org.infinispan.REPL_SYNC\", \"org.infinispan.SCATTERED_SYNC\", \"org.infinispan.INVALIDATION_ASYNC\", \"org.infinispan.DIST_ASYNC\" ], \"physical_addresses\" : \"[192.0.2.0:7800]\", \"coordinator_address\" : \"<hostname>\", \"cache_manager_status\" : \"RUNNING\", \"created_cache_count\" : \"1\", \"running_cache_count\" : \"1\", \"node_address\" : \"<hostname>\", \"cluster_members\" : [ \"<hostname1>\", \"<hostname2>\" ], \"cluster_members_physical_addresses\" : [ \"192.0.2.0:7800\", \"192.0.2.0:7801\" ], \"cluster_size\" : 2, \"defined_caches\" : [ { \"name\" : \"mycache\", \"started\" : true }, { \"name\" : \"___protobuf_metadata\", \"started\" : true } ] }",
"[//containers/default]> ls caches counters configurations schemas tasks",
"[hostname@cluster/]> ls containers cluster server",
"shutdown cluster",
"shutdown server <my_server01>",
"ISPN080002: Data Grid Server stopping ISPN000080: Disconnecting JGroups channel cluster ISPN000390: Persisted state, version=<USDversion> timestamp=YYYY-MM-DDTHH:MM:SS ISPN080003: Data Grid Server stopped"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/using_the_data_grid_command_line_interface/getting-started |
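Because the user create command can be run against the local installation before the server starts, the credential steps above are easy to script. The following sketch bootstraps an admin user and a developers group on one server instance and then confirms the entries in the properties files. The usernames and passwords are placeholders, the commands mirror the procedures above and are assumed to operate on the local properties realm when run offline, and, as noted above, you must repeat this on every node that uses a properties realm.

#!/bin/bash
# Hedged sketch: bootstrap credentials for a properties realm.
# Run from the Data Grid Server installation directory ($RHDG_HOME).
bin/cli.sh user create admin -p changeme
bin/cli.sh user create --groups=developers developers -p changeme

# Confirm the new entries exist in the realm's property files.
grep -q '^admin=' server/conf/users.properties && echo "admin user present"
grep -q 'developers' server/conf/groups.properties && echo "developers group present"

Role grants with the user roles grant command still follow the connected-session procedure described above.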
Configuring Capsules with a load balancer | Configuring Capsules with a load balancer Red Hat Satellite 6.16 Distribute load among Capsules Red Hat Satellite Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/configuring_capsules_with_a_load_balancer/index |
Chapter 1. Shenandoah garbage collector | Chapter 1. Shenandoah garbage collector Shenandoah is the low pause time garbage collector (GC) that reduces GC pause times by performing more garbage collection work concurrently with the running Java program. The Concurrent Mark Sweep (CMS) garbage collector and G1, the default garbage collector for Red Hat build of OpenJDK 11, perform concurrent marking of live objects. Shenandoah adds concurrent compaction. Shenandoah also reduces GC pause times by compacting objects concurrently with running Java threads. Pause times with Shenandoah are independent of the heap size, meaning you will have consistent pause times whether your heap is 200 MB or 200 GB. Shenandoah is an algorithm for applications that require responsiveness and predictable short pauses. Additional resources For more information about the Shenandoah garbage collector, see Shenandoah GC in the Oracle OpenJDK documentation. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/using_shenandoah_garbage_collector_with_red_hat_build_of_openjdk_11/shenandoah-gc-overview
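The collector is enabled per JVM with a command-line flag. The following sketch shows one way to start an application with Shenandoah and basic GC logging on Red Hat build of OpenJDK 11; the heap sizes and the application JAR name are placeholders, and on builds where Shenandoah is still experimental you may additionally need -XX:+UnlockExperimentalVMOptions.

# Hedged sketch: run an application with the Shenandoah collector and GC logging.
java -XX:+UseShenandoahGC \
     -Xms4g -Xmx4g \
     -Xlog:gc \
     -jar my-app.jar

Setting -Xms and -Xmx to the same value is a common choice for latency-sensitive services because it avoids pauses caused by heap resizing.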
Chapter 4. Evaluating security risks | Chapter 4. Evaluating security risks Red Hat Advanced Cluster Security for Kubernetes assesses risk across your entire environment and ranks your running deployments according to their security risk. It also provides details about vulnerabilities, configurations, and runtime activities that require immediate attention. 4.1. Risk view The Risk view lists all deployments from all clusters, sorted by a multi-factor risk metric based on policy violations, image contents, deployment configuration, and other similar factors. Deployments at the top of the list present the most risk. The Risk view shows a list of deployments with the following attributes for each row: Name : The name of the deployment. Created : The creation time of the deployment. Cluster : The name of the cluster where the deployment is running. Namespace : The namespace in which the deployment exists. Priority : A priority ranking based on severity and risk metrics. In the Risk view, you can: Select a column heading to sort the violations in ascending or descending order. Use the filter bar to filter violations. Create a new policy based on the filtered criteria. To view more details about the risks for a deployment, select a deployment in the Risk view. 4.1.1. Opening the risk view You can analyze all risks in the Risk view and take corrective action. Procedure Go to the RHACS portal and select Risk from the navigation menu. 4.2. Creating a security policy from the risk view When you apply local page filtering in the Risk view, you can create new security policies based on the filtering criteria you are using. Procedure Go to the RHACS portal and select Risk from the navigation menu. Apply local page filtering criteria that you want to create a policy for. Select New Policy and fill in the required fields to create a new policy. 4.2.1. Understanding how Red Hat Advanced Cluster Security for Kubernetes transforms the filtering criteria into policy criteria When you create new security policies from the Risk view, based on the filtering criteria you use, not all criteria are directly applied to the new policy. Red Hat Advanced Cluster Security for Kubernetes converts the Cluster , Namespace , and Deployment filters to equivalent policy scopes. Local page filtering on the Risk view combines the search terms by using the following methods: Combines the search terms within the same category with an OR operator. For example, if the search query is Cluster:A,B , the filter matches deployments in cluster A or cluster B . Combines the search terms from different categories with an AND operator. For example, if the search query is Cluster:A+Namespace:Z , the filter matches deployments in cluster A and in namespace Z . When you add multiple scopes to a policy, the policy matches violations from any of the scopes. For example, if you search for (Cluster A OR Cluster B) AND (Namespace Z), it results in two policy scopes, (Cluster=A AND Namespace=Z) OR (Cluster=B AND Namespace=Z) . Red Hat Advanced Cluster Security for Kubernetes drops or modifies filters that do not directly map to policy criteria and reports the dropped filters.
The following table lists how the filtering search attributes map to the policy criteria: Search attribute Policy criteria Add Capabilities Add Capabilities Annotation Disallowed Annotation CPU Cores Limit Container CPU Limit CPU Cores Request Container CPU Request CVE CVE CVE Published On Dropped CVE Snoozed Dropped CVSS CVSS Cluster Converted to scope Component Image Component (name) Component Version Image Component (version) Deployment Converted to scope Deployment Type Dropped Dockerfile Instruction Keyword Dockerfile Line (key) Dockerfile Instruction Value Dockerfile Line (value) Drop Capabilities Dropped Environment Key Environment Variable (key) Environment Value Environment Variable (value) Environment Variable Source Environment Variable (source) Exposed Node Port Dropped Exposing Service Dropped Exposing Service Port Dropped Exposure Level Port Exposure External Hostname Dropped External IP Dropped Image Dropped Image Command Dropped Image Created Time Days since image was created Image Entrypoint Dropped Image Label Disallowed Image Label Image OS Image OS Image Pull Secret Dropped Image Registry Image Registry Image Remote Image Remote Image Scan Time Days since image was last scanned Image Tag Image Tag Image Top CVSS Dropped Image User Dropped Image Volumes Dropped Label Converted to scope Max Exposure Level Dropped Memory Limit (MB) Container Memory Limit Memory Request (MB) Container Memory Request Namespace Converted to scope Namespace ID Dropped Pod Label Dropped Port Port Port Protocol Protocol Priority Dropped Privileged Privileged Process Ancestor Process Ancestor Process Arguments Process Arguments Process Name Process Name Process Path Dropped Process Tag Dropped Process UID Process UID Read Only Root Filesystem Read-Only Root Filesystem Secret Dropped Secret Path Dropped Service Account Dropped Service Account Permission Level Minimum RBAC Permission Level Toleration Key Dropped Toleration Value Dropped Volume Destination Volume Destination Volume Name Volume Name Volume ReadOnly Writable Volume Volume Source Volume Source Volume Type Volume Type 4.3. Viewing risk details When you select a deployment in the Risk view, the Risk Details open in a panel on the right. The Risk Details panel shows detailed information grouped by multiple tabs. 4.3.1. Risk Indicators tab The Risk Indicators tab of the Risk Details panel explains the discovered risks. The Risk Indicators tab includes the following sections: Policy Violations : The names of the policies that are violated for the selected deployment. Suspicious Process Executions : Suspicious processes, arguments, and container names that the process ran in. Image Vulnerabilities : Images including total CVEs with their CVSS scores. Service Configurations : Aspects of the configurations that are often problematic, such as read-write (RW) capability, whether capabilities are dropped, and the presence of privileged containers. Service Reachability : Container ports exposed inside or outside the cluster. Components Useful for Attackers : Discovered software tools that are often used by attackers. Number of Components in Image : The number of packages found in each image. Image Freshness : Image names and age, for example, 285 days old . RBAC Configuration : The level of permissions granted to the deployment in Kubernetes role-based access control (RBAC). Note Not all sections are visible in the Risk Indicators tab.
Red Hat Advanced Cluster Security for Kubernetes displays only relevant sections that affect the selected deployment. 4.4. Deployment Details tab The sections in the Deployment Details tab of the Deployment Risk panel provide more information so you can make appropriate decisions on how to address the discovered risk. 4.4.1. Overview section The Overview section shows details about the following: Deployment ID : An alphanumeric identifier for the deployment. Namespace : The Kubernetes or OpenShift Container Platform namespace in which the deployment exists. Updated : A timestamp with date for when the deployment was updated. Deployment Type : The type of deployment, for example, Deployment or DaemonSet . Replicas : The number of pods deployed for this deployment. Labels : The key-value labels attached to the Kubernetes or OpenShift Container Platform application. Cluster : The name of the cluster where the deployment is running. Annotations : The Kubernetes annotations for the deployment. Service Account : Represents an identity for processes that run in a pod. When a process is authenticated through a service account, it can contact the Kubernetes API server and access cluster resources. If a pod does not have an assigned service account, it gets the default service account. 4.4.2. Container configuration section The container configuration section shows details about the following: Image Name : The name of the image that is deployed. Resources CPU Request (cores) : The number of CPUs requested by the container. CPU Limit (cores) : The maximum number of CPUs the container can use. Memory Request (MB) : The memory size requested by the container. Memory Limit (MB) : The maximum amount of memory the container can use without being killed. Mounts Name : The name of the mount. Source : The path from where the data for the mount comes. Destination : The path to which the data for the mount goes. Type : The type of the mount. Secrets : The names of Kubernetes secrets used in the deployment, and basic details for secret values that are X.509 certificates. 4.4.3. Security context section The Security Context section shows details about the following: Privileged : Lists true if the container is privileged. 4.5. Process discovery tab The Process Discovery tab provides a comprehensive list of all binaries that have been executed in each container in your environment, summarized by deployment. The process discovery tab shows details about the following: Binary Name : The name of the binary that was executed. Container : The container in the deployment in which the process executed. Arguments : The specific arguments that were passed with the binary. Time : The date and time of the most recent time the binary was executed in a given container. Pod ID : The identifier of the pod in which the container resides. UID : The Linux user identity under which the process executed. Use the Process Name:<name> query in the filter bar to find specific processes. 4.5.1. Event timeline section The Event Timeline section in the Process Discovery tab provides an overview of events for the selected deployment. It shows the number of policy violations, process activities, and container termination or restart events. You can select Event Timeline to view more details. The Event Timeline modal box shows events for all pods for the selected deployment. The events on the timeline are categorized as: Process activities Policy violations Container restarts Container terminations The events appear as icons on a timeline. 
To see more details about an event, hold your mouse pointer over the event icon. The details appear in a tooltip. Click Show Legend to see which icon corresponds to which type of event. Select Export Download PDF or Export Download CSV to download the event timeline information. Select the Show All drop-down menu to filter which type of events are visible on the timeline. Click on the expand icon to see events separately for each container in the selected pod. All events in the timeline are also visible in the minimap control at the bottom. The minimap controls the number of events visible in the event timeline. You can change the events shown in the timeline by modifying the highlighted area on the minimap. To do this, decrease the highlighted area from left or right sides (or both), and then drag the highlighted area. Note When containers restart, Red Hat Advanced Cluster Security for Kubernetes: Shows information about container termination and restart events for up to 10 inactive container instances for each container in a pod. For example, for a pod with two containers app and sidecar , Red Hat Advanced Cluster Security for Kubernetes keeps activity for up to 10 app instances and up to 10 sidecar instances. Does not track process activities associated with the instances of the container. Red Hat Advanced Cluster Security for Kubernetes only shows the most recent execution of each (process name, process arguments, UID) tuple for each pod. Red Hat Advanced Cluster Security for Kubernetes shows events only for the active pods. Red Hat Advanced Cluster Security for Kubernetes adjusts the reported timestamps based on time reported by Kubernetes and the Collector. Kubernetes timestamps use second-based precision, and it rounds off the time to the nearest second. However, the Collector uses more precise timestamps. For example, if Kubernetes reports the container start time as 10:54:48 , and the Collector reports a process in that container started at 10:54:47.5349823 , Red Hat Advanced Cluster Security for Kubernetes adjusts the container start time to 10:54:47.5349823 . 4.6. Using process baselines You can minimize risk by using process baselining for infrastructure security. With this approach, Red Hat Advanced Cluster Security for Kubernetes first discovers existing processes and creates a baseline. Then it operates in the default deny-all mode and only allows processes listed in the baseline to run. Process baselines When you install Red Hat Advanced Cluster Security for Kubernetes, there is no default process baseline. As Red Hat Advanced Cluster Security for Kubernetes discovers deployments, it creates a process baseline for every container type in a deployment. Then it adds all discovered processes to their own process baselines. Process baseline states During the process discovery phase, all baselines are in an unlocked state. In an unlocked state: When Red Hat Advanced Cluster Security for Kubernetes discovers a new process, it adds that process to the process baseline. Processes do not show up as risks and do not trigger any violations. After an hour from when Red Hat Advanced Cluster Security for Kubernetes receives the first process indicator from a container in a deployment, it finishes the process discovery phase. At this point: Red Hat Advanced Cluster Security for Kubernetes stops adding processes to the process baselines. New processes that are not in the process baseline show up as risks, but they do not trigger any violations. 
To generate violations, you must manually lock the process baseline. In a locked state: Red Hat Advanced Cluster Security for Kubernetes stops adding processes to the process baselines. New processes that are not in the process baseline trigger violations. Independent of the locked or unlocked baseline state, you can always add or remove processes from the baseline. Note For a deployment, if each pod has multiple containers in it, Red Hat Advanced Cluster Security for Kubernetes creates a process baseline for each container type. For such a deployment, if some baselines are locked and some are unlocked, the baseline status for that deployment shows up as Mixed . 4.6.1. Viewing the process baselines You can view process baselines from the Risk view. Procedure In the RHACS portal, select Risk from the navigation menu. Select a deployment from the list of deployments in the default Risk view. Deployment details open in a panel on the right. In the Deployment details panel, select the Process Discovery tab. The process baselines are visible under the Spec Container Baselines section. 4.6.2. Adding a process to the baseline You can add a process to the baseline. Procedure In the RHACS portal, select Risk from the navigation menu. Select a deployment from the list of deployments in the default Risk view. Deployment details open in a panel on the right. In the Deployment details panel, select the Process Discovery tab. Under the Running Processes section, click the Add icon for the process you want to add to the process baseline. Note The Add icon is available only for the processes that are not in the process baseline. 4.6.3. Removing a process from the baseline You can remove a process from the baseline. Procedure In the RHACS portal, select Risk from the navigation menu. Select a deployment from the list of deployments in the default Risk view. Deployment details open in a panel on the right. In the Deployment details panel, select the Process Discovery tab. Under the Spec Container baselines section, click the Remove icon for the process you want to remove from the process baseline. 4.6.4. Locking and unlocking the process baselines You can Lock the baseline to trigger violations for all processes not listed in the baseline and Unlock the baseline to stop triggering violations. Procedure In the RHACS portal, select Risk from the navigation menu. Select a deployment from the list of deployments in the default Risk view. Deployment details open in a panel on the right. In the Deployment details panel, select the Process Discovery tab. Under the Spec Container baselines section: Click the Lock icon to trigger violations for processes that are not in the baseline. Click the Unlock icon to stop triggering violations for processes that are not in the baseline. | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/operating/evaluate-security-risks |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.